Quantum mechanics and machine learning share a deep mathematical kinship — both deal with high-dimensional spaces, probability amplitudes, and optimisation over complex landscapes. This cluster explores both directions: classical ML that solves quantum problems (many-body physics, DFT, quantum control), and quantum computing as a new paradigm for ML. Full working code using PennyLane and PyTorch.
⚛ Neural Quantum States
⚡ VQE & Quantum Circuits
🔧 Error Correction ML
💻 PennyLane + PyTorch
Cluster 6: Quantum ML
Cluster 5 worked at the atomic scale of condensed matter — crystal graphs, force fields, phase transitions. Cluster 6 goes deeper: into the quantum wavefunction itself, and then outward to ask whether quantum computers can accelerate ML. This is the most mathematically rich cluster in the series.
📋 What You Will Learn
- The Two Directions of Quantum ML
- Neural Quantum States (NQS) & VMC
- Variational Quantum Eigensolver (VQE)
- Quantum Classifiers with PennyLane
- Neural Network Quantum State Tomography
- Surface Code Error Correction with ML
- ML-Accelerated DFT (DeepMind DM21)
- Quantum Advantage: Honest Assessment
Section 1 — The Two Directions of Quantum Machine Learning
Quantum machine learning (QML) is often misunderstood as a single field. It is actually two distinct research programmes running in parallel, with very different goals, tools, and timelines. Conflating them leads to a lot of confusion — so let’s be precise from the start.
Direction A — Classical ML for Quantum Problems
Use classical neural networks to solve quantum physics problems: finding ground states of many-body Hamiltonians, approximating DFT exchange-correlation functionals, learning quantum control protocols, classifying quantum phases. The ML runs on classical hardware. The physics is quantum.
Tools: PyTorch, JAX, NetKet, FermiNet
Status: Production ready now. Used in published research.
Direction B — Quantum Computing for ML
Use quantum computers to run ML algorithms that might be faster or more expressive than classical methods: variational quantum circuits as ML models, quantum kernels for SVMs, quantum sampling. The hardware is quantum. The application is ML.
Tools: PennyLane, Qiskit, Cirq, Amazon Braket
Status: Early research. No practical quantum advantage demonstrated yet.
This cluster covers both directions. We start with Direction A — immediately useful, producing results today — before moving to Direction B with an honest assessment of where quantum-enhanced ML actually stands.
Section 2 — Neural Quantum States: Teaching Neural Networks the Wavefunction
The central problem of quantum many-body physics is finding the ground state of a Hamiltonian. For N spin-1/2 particles, the Hilbert space has dimension 2^N. For N=100 spins, that’s 2^100 ≈ 10^30 complex numbers. Storing and manipulating this vector is the quantum exponential wall — it is why many-body quantum mechanics is fundamentally hard.
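To make the wall concrete, here is a minimal sketch (plain NumPy; function names are my own) that builds the transverse-field Ising Hamiltonian H = −J Σ σᶻᵢσᶻᵢ₊₁ − h Σ σˣᵢ as a dense 2^N matrix via Kronecker products and diagonalises it exactly — feasible for N ≈ 12, hopeless far beyond that:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli Z
sx = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli X

def op_on_site(op, i, n):
    """Embed a single-site operator at site i into the full 2^n-dim space."""
    out = op if i == 0 else I2
    for j in range(1, n):
        out = np.kron(out, op if j == i else I2)
    return out

def tfim_hamiltonian(n, J=1.0, h=1.0):
    """H = -J sum sz_i sz_{i+1} - h sum sx_i (open chain), dense 2^n x 2^n."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(sx, i, n)
    return H

for n in [2, 6, 10]:
    H = tfim_hamiltonian(n, J=1.0, h=1.0)
    E0 = np.linalg.eigvalsh(H)[0]   # exact diagonalisation
    print(f"N={n:2d}: dim={2**n:5d}, ground-state E/N = {E0/n:.4f}")

# Memory for the dense matrix alone scales as (2^N)^2 * 8 bytes:
# N=10 -> ~8 MB, N=20 -> ~8 TB, N=30 -> ~8 EB. This is the exponential wall.
```

Exact diagonalisation at small N is also the standard sanity check for an NQS: train the network, then compare its variational energy against the exact E₀.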
Neural Quantum States (NQS), introduced by Carleo & Troyer (2017) in Science, use a neural network to represent the wavefunction. Instead of storing all 2^N amplitudes explicitly, the network takes a spin configuration as input and outputs the amplitude ψ(σ). The network has far fewer parameters, but can still represent highly entangled states that exact diagonalisation cannot handle for large N.
Training uses the Variational Monte Carlo (VMC) method: sample spin configurations σ from |ψ|², compute the local energy E_loc(σ), and minimise its expectation value. By the variational principle, 〈H〉 ≥ E_0 for any state, so minimising 〈H〉 pushes the network toward the true ground state.
Transverse-Field Ising Model: Complete NQS Implementation

Python — RBM Neural Quantum State: Variational Monte Carlo for TFIM ground state
# pip install torch   (for production NQS use NetKet: pip install netket)
import torch
import torch.nn as nn
import numpy as np

# ── Transverse-Field Ising Model Hamiltonian ───────────────────
class TFIM:
    def __init__(self, n=10, J=1.0, h=1.0):
        self.n, self.J, self.h = n, J, h

    def local_energy(self, psi_net, sigma):
        # E_loc(sigma) = sum_sigma' H_{sigma,sigma'} psi(sigma')/psi(sigma)
        log_psi = psi_net(sigma.float())
        E_loc = torch.zeros(sigma.shape[0])
        # Diagonal ZZ terms
        for i in range(self.n - 1):
            E_loc = E_loc - self.J * sigma[:, i].float() * sigma[:, i+1].float()
        # Off-diagonal: sigma_x flips spin i — ratio psi(flipped)/psi(sigma)
        for i in range(self.n):
            sf = sigma.clone()
            sf[:, i] = -sf[:, i]
            E_loc = E_loc - self.h * torch.exp(psi_net(sf.float()) - log_psi)
        return E_loc

# ── RBM Wavefunction ───────────────────────────────────────────
# Restricted Boltzmann Machine: simplest NQS architecture.
# Visible: spin config sigma. Hidden: learned correlations.
class RBMWavefunction(nn.Module):
    def __init__(self, n_vis=10, n_hid=30):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_hid, n_vis))
        self.b = nn.Parameter(torch.zeros(n_vis))
        self.c = nn.Parameter(torch.zeros(n_hid))

    def forward(self, sigma):
        # Returns log|psi(sigma)| — numerically stable
        pre = self.c + sigma @ self.W.T
        # log(2 cosh x) = logsumexp([x, -x]); the constant offset is irrelevant
        lcosh = torch.stack([pre, -pre]).logsumexp(dim=0)
        return sigma @ self.b + lcosh.sum(dim=-1)

# ── VMC training loop ──────────────────────────────────────────
model = RBMWavefunction(n_vis=10, n_hid=30)
tfim = TFIM(n=10, J=1.0, h=1.0)
optim = torch.optim.Adam(model.parameters(), lr=5e-3)

# For n=10 all 2^10 = 1024 configurations fit in memory, so the
# expectation over |psi|^2 can be computed exactly. Larger systems
# require Metropolis sampling from |psi|^2 (which is what NetKet does).
configs = torch.tensor([[1 if (s >> i) & 1 else -1 for i in range(10)]
                        for s in range(2**10)], dtype=torch.float32)

for step in range(2000):
    optim.zero_grad()
    log_psi = model(configs)
    probs = torch.softmax(2.0 * log_psi, dim=0)   # normalised |psi(sigma)|^2
    E_loc = tfim.local_energy(model, configs)
    loss = (probs * E_loc).sum()   # variational energy <H> >= E_0
    loss.backward(); optim.step()
    if (step + 1) % 400 == 0:
        print(f"Step {step+1}: E/N = {loss.item()/10:.4f} J")
🧠 Concept: Why the Variational Principle Works as a Loss Function
The variational principle states: for any normalised state |ψ〉, the energy 〈ψ|H|ψ〉 ≥ E_0, the true ground-state energy. This gives a natural loss function for quantum physics — minimise 〈H〉 and you approach the ground state. The neural network provides a flexible, compact parameterisation of |ψ〉, and gradient descent finds the optimal parameters. This is VMC: quantum physics on classical hardware, powered by ML.
📖 Key Tool — NetKet: the NQS library
NetKet is the standard production library for neural quantum states, built on JAX. It provides RBMs, CNNs, transformers, and custom architectures as wavefunctions, with Metropolis sampling and VMC. For realistic molecules, DeepMind's FermiNet enforces fermionic antisymmetry and achieves near-exact energies.
Section 3 — The Variational Quantum Eigensolver (VQE)
The Variational Quantum Eigensolver runs the variational principle on a quantum computer. The wavefunction is prepared by a parameterised quantum circuit — a sequence of quantum gates with trainable rotation angles θ. The energy expectation value is measured by running the circuit many times and averaging the results.
The key loop: (1) prepare |ψ(θ)〉 on the quantum processor, (2) measure 〈H〉 via Pauli decomposition, (3) compute gradients using the parameter-shift rule, (4) update θ on a classical computer. Quantum hardware handles the exponentially large state space; classical hardware handles optimisation. This is called a quantum-classical hybrid algorithm.
VQE for the H₂ Molecule with PennyLane
Python — VQE for H₂: PennyLane + PyTorch, Jordan-Wigner mapping, Adam optimiser
# pip install pennylane pyscf
import pennylane as qml
from pennylane import qchem
import numpy as np
import torch

# ── Build H2 molecular Hamiltonian (Jordan-Wigner mapping) ─────
symbols = ["H", "H"]
# PennyLane's qchem module works in atomic units:
# equilibrium bond length 0.74 Angstrom = 1.40 Bohr
coordinates = np.array([[0.0, 0.0, 0.0],
                        [0.0, 0.0, 1.40]])
H, n_qubits = qchem.molecular_hamiltonian(
    symbols, coordinates,
    method="pyscf",
    active_electrons=2, active_orbitals=2,
    mapping="jordan_wigner",
)
print(f"H2 Hamiltonian: {n_qubits} qubits, {len(H.terms()[0])} Pauli terms")

# ── Ansatz circuit: hardware-efficient for 4-qubit H2 ──────────
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(params):
    # Hartree-Fock initial state: occupy lowest 2 of 4 spin orbitals
    qml.BasisState(np.array([1, 1, 0, 0]), wires=range(n_qubits))
    # Entangling layer — CNOT ladder + parameterised rotations
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i+1])
    for i in range(n_qubits):
        qml.Rot(params[3*i], params[3*i+1], params[3*i+2], wires=i)
    return qml.expval(H)   # measure energy

# ── Optimise ───────────────────────────────────────────────────
params = torch.tensor(
    np.random.uniform(-np.pi, np.pi, n_qubits * 3),
    dtype=torch.float64, requires_grad=True)
optim = torch.optim.Adam([params], lr=0.1)

for step in range(150):
    optim.zero_grad()
    energy = circuit(params)
    # Backprop through the simulator; on real hardware PennyLane
    # falls back to the parameter-shift rule for gradients
    energy.backward()
    optim.step()
    if (step + 1) % 30 == 0:
        print(f"Step {step+1:3d}: E = {energy.item():.6f} Hartree")

# H2 exact (FCI/STO-3G) ground state: -1.136189 Hartree
# Chemical accuracy: |E_VQE - E_exact| < 0.00159 Ha
print(f"Final: {circuit(params).item():.6f} Ha (exact: -1.136189 Ha)")
🧠 Concept: The Parameter-Shift Rule — Exact Quantum Gradients
How do you compute gradients through a quantum circuit? You cannot use autograd directly, because quantum hardware is not differentiable software. The parameter-shift rule provides the solution: for gates of the form U(θ) = e^{-iθG} with a Pauli-type generator G (eigenvalues ±1/2), the gradient is ∂E/∂θ = [E(θ + π/2) − E(θ − π/2)] / 2. This requires exactly two circuit evaluations per parameter — giving exact quantum gradients without finite differences. PennyLane applies this automatically when the device requires it.
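The rule is easy to verify numerically. For a single qubit prepared with RX(θ) and measured in Z, the expectation is E(θ) = cos θ, so the analytic gradient is −sin θ; this sketch in plain NumPy (no quantum library needed, names my own) checks the shift formula against it:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation RX(theta) = exp(-i theta X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])

def energy(theta):
    """E(theta) = <0| RX(theta)^dag Z RX(theta) |0> = cos(theta)."""
    psi = rx(theta) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

theta = 0.7
shift_grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
print(shift_grad, -np.sin(theta))   # identical up to float rounding
```

Note this is not a finite-difference approximation: the ±π/2 shifts give the gradient exactly, at any θ.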
Section 4 — Quantum Classifiers and Quantum Kernel Methods
Variational quantum circuits (VQCs) can be used as trainable classification models — a direct quantum analogue of a neural network. Data is encoded into qubit states via an encoding layer, processed by parameterised entangling gates, and then measured to produce a class prediction.
The connection to classical ML runs deeper through quantum kernel methods. Every quantum circuit implicitly defines a feature map φ(x) into Hilbert space. The fidelity |〈φ(x)|φ(x′)〉|² is a quantum kernel — potentially hard to compute classically but easy to estimate on a quantum processor. A classical SVM with a quantum kernel is currently the most theoretically grounded near-term quantum ML algorithm.
Python — VQC classifier + quantum kernel SVM with PennyLane
import pennylane as qml
import numpy as np
import torch

n_qubits = 4
n_layers = 3
dev = qml.device("default.qubit", wires=n_qubits)

# ── Angle encoding: map features to qubit rotations ────────────
def encode(x):
    for i in range(n_qubits):
        qml.RY(x[i] * np.pi, wires=i)

# ── Entangling layer: parameterised rotations + CNOT ladder ────
def ent_layer(w):
    for i in range(n_qubits):
        qml.Rot(w[i, 0], w[i, 1], w[i, 2], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i+1])

@qml.qnode(dev, interface="torch")
def vqc(x, weights):
    encode(x)
    for layer in weights:
        ent_layer(layer)
    # Expectation in [-1, 1]; its sign gives class 0 or 1
    return qml.expval(qml.PauliZ(0))

# ── Quantum kernel: fidelity between two encoded states ────────
@qml.qnode(dev)
def kernel_circuit(x1, x2):
    encode(x1)
    qml.adjoint(encode)(x2)   # adjoint = inverse encoding of x2
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    # P(|00...0>) = |<phi(x2)|phi(x1)>|^2
    return kernel_circuit(x1, x2)[0]

# ── Build kernel matrix for SVM ────────────────────────────────
def kernel_matrix(X1, X2):
    return np.array([[quantum_kernel(x1, x2) for x2 in X2] for x1 in X1])

# Usage with sklearn SVM:
# from sklearn.svm import SVC
# K_train = kernel_matrix(X_train, X_train)
# K_test  = kernel_matrix(X_test, X_train)
# clf = SVC(kernel='precomputed').fit(K_train, y_train)
# acc = clf.score(K_test, y_test)
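One caveat worth internalising: with a purely product-state angle encoding (RY rotations and no entangling gates inside the encoding), the "quantum" kernel collapses to a classical closed form, k(x, x′) = ∏ᵢ cos²(π(xᵢ − x′ᵢ)/2). This NumPy sketch (function names my own) confirms the identity — which is why any hope of classical hardness rests on entangling feature maps:

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def encoded_state(x):
    """|phi(x)> = RY(pi*x_1)|0> (x) ... (x) RY(pi*x_n)|0> — a product state."""
    psi = np.array([1.0])
    for xi in x:
        psi = np.kron(psi, ry(np.pi * xi) @ np.array([1.0, 0.0]))
    return psi

def fidelity_kernel(x1, x2):
    return float(np.abs(encoded_state(x1) @ encoded_state(x2)) ** 2)

def product_form(x1, x2):
    # Per qubit: <0|RY(-pi*x2_i) RY(pi*x1_i)|0> = cos(pi*(x1_i - x2_i)/2)
    return float(np.prod(np.cos(np.pi * (np.asarray(x1) - np.asarray(x2)) / 2) ** 2))

x1 = np.array([0.1, 0.7, 0.3, 0.9])
x2 = np.array([0.4, 0.2, 0.8, 0.5])
print(fidelity_kernel(x1, x2), product_form(x1, x2))   # equal: the kernel is classical
```

Proposals for quantum kernel advantage (e.g. IQP-style or problem-structured encodings) deliberately avoid this factorisation.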
Section 5 — Quantum State Tomography with Neural Networks
Quantum state tomography (QST) reconstructs an unknown quantum state from repeated measurements. The challenge: for an n-qubit state, the density matrix has 4^n entries. For n=10 qubits that’s ~10^6 parameters. Naively reconstructing the full density matrix requires exponentially many measurements and exponential classical post-processing.
Neural network tomography solves this by learning a compressed representation. Instead of fitting the full density matrix, you train an NQS to match the measurement statistics. The network’s compact parameterisation provides automatic regularisation, preventing overfitting to finite measurement statistics. You can then estimate any observable from the learned state.
Python — Neural QST: learn quantum state from measurement data, estimate any observable
# Neural network quantum state tomography
# Input: bitstring measurement outcomes (computational basis here for
#   simplicity; full QST also measures in rotated Pauli bases)
# Output: learned wavefunction that reproduces the measurement statistics
import torch
import torch.nn as nn
import numpy as np

class NQSTomography(nn.Module):
    def __init__(self, n_qubits=4, hidden=64):
        super().__init__()
        # Two-output net: real and imaginary parts of the amplitude
        self.net = nn.Sequential(
            nn.Linear(n_qubits, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),   # [Re(psi), Im(psi)]
        )

    def log_prob(self, sigma):
        # Unnormalised log |psi(sigma)|^2
        out = self.net(sigma.float())
        amp = torch.complex(out[:, 0], out[:, 1])
        return 2.0 * (amp.abs() + 1e-10).log()

# ── Training: maximise likelihood of observed measurement bitstrings ──
def tomography_loss(model, bitstrings, n_qubits=4):
    # The model is unnormalised, so subtract log Z; for small n the
    # 2^n basis states can be enumerated exactly
    basis = torch.tensor([[float((s >> i) & 1) for i in range(n_qubits)]
                          for s in range(2**n_qubits)])
    log_Z = model.log_prob(basis).logsumexp(dim=0)
    return -(model.log_prob(bitstrings) - log_Z).mean()

# ── Estimate diagonal observables via importance sampling ──────
def estimate_observable(model, obs_fn, n=10000, n_qubits=4):
    sigma = torch.randint(0, 2, (n, n_qubits)).float()
    log_w = model.log_prob(sigma)
    weights = torch.exp(log_w - log_w.logsumexp(dim=0))   # self-normalised
    return (weights * obs_fn(sigma)).sum()

# With enough measurements, NQS tomography reconstructs entanglement
# entropy, correlation functions, and other observables without storing
# the exponentially large density matrix explicitly.
# Reference: Torlai et al. (2018) Nature Physics — original NN-QST paper
Section 6 — Quantum Error Correction with Machine Learning
Quantum computers are fragile. Qubits decohere, gates are noisy, and errors accumulate with every operation. Quantum error correction (QEC) encodes one logical qubit into many physical qubits, and uses syndrome measurements to detect errors without collapsing the logical state. The decoding problem — given a syndrome pattern, which correction to apply — is where ML has made major practical inroads.
The surface code is the leading QEC architecture for scalable quantum computing. Each logical qubit lives on a 2D lattice of d×d physical qubits. Syndromes are (d-1)×(d-1) grids of ±1 measurement outcomes. A CNN decoder takes the syndrome image as input and outputs the most likely logical error class — exactly the kind of pattern recognition that CNNs excel at.
Python — Surface code CNN decoder: syndrome image → logical error class
# Surface code ML decoder — CNN maps syndrome pattern to logical error
import torch
import torch.nn as nn
import numpy as np

# ── Toy syndrome data ──────────────────────────────────────────
# Placeholder: random syndromes with random labels. Real decoders are
# trained on syndromes from a Pauli-noise simulator (e.g. stim), with
# labels obtained by tracking the actual logical error.
def gen_syndromes(d=3, N=100000):
    n_syn = 2 * (d - 1)**2   # X and Z stabiliser grids, (d-1)x(d-1) each
    syn = np.random.randint(0, 2, (N, n_syn)).astype(np.float32)
    labels = np.random.randint(0, 4, N)   # I, X, Z, Y logical error
    return syn, labels

# ── CNN decoder ────────────────────────────────────────────────
class SurfaceDecoder(nn.Module):
    def __init__(self, d=3):
        super().__init__()
        self.d = d
        # Treat the (d-1)x(d-1) X and Z syndrome grids as a 2-channel image
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 4),   # I, X, Z, Y logical error classes
        )

    def forward(self, syndrome):
        sz = self.d - 1
        s2d = syndrome.reshape(-1, 2, sz, sz)
        return self.head(self.cnn(s2d))

# ── Training ───────────────────────────────────────────────────
syn, labels = gen_syndromes(d=3, N=100000)
X = torch.tensor(syn); y = torch.tensor(labels)
model = SurfaceDecoder(d=3)
optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

for epoch in range(30):
    perm = torch.randperm(len(X))
    for i in range(0, len(X), 512):
        bx = X[perm[i:i+512]]; by = y[perm[i:i+512]]
        optim.zero_grad()
        loss = nn.CrossEntropyLoss()(model(bx), by)
        loss.backward(); optim.step()
    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch+1}: loss = {loss.item():.4f}")

# Neural decoders are competitive with MWPM in accuracy; the practical
# challenge is latency — decoding must keep pace with the microsecond-scale
# error-correction cycle of superconducting hardware.
✅ Real-World Engineering Impact
Google's 2023 Nature paper showed that scaling a surface code logical qubit from distance 3 to distance 5 suppresses logical errors, and the 2024 follow-up operated clearly below threshold. The real-time classical decoder runs alongside the quantum processor, correcting errors faster than decoherence. This is not theoretical QEC — it is operational engineering, and neural decoders (such as DeepMind's AlphaQubit) are part of that research pipeline right now.
Section 7 — ML-Accelerated Density Functional Theory (DFT)
Density Functional Theory is the dominant method of computational quantum chemistry and condensed matter physics — it is used in over 30,000 papers per year. The Hohenberg-Kohn theorem guarantees that the ground-state energy is a unique functional of the electron density ρ(r). In principle, this reduces the exponential many-body problem to a 3D function of space.
The practical problem: the exact exchange-correlation (XC) functional E_xc[ρ] is unknown. All DFT calculations use approximations (LDA, GGA, hybrid functionals), and the choice determines accuracy. DeepMind's 2021 DM21 functional — a neural network trained on high-accuracy quantum chemistry data — outperforms standard hand-crafted approximations on long-standing benchmark failures. It is freely available and represents a genuine breakthrough.
Python — ML exchange-correlation functional: neural network approximates E_xc[ρ]
# Simplified ML-XC functional inspired by DeepMind's DM21
# Full implementation: github.com/google-deepmind/deepmind-research
#   (density_functional_approximation_dm21)
import torch
import torch.nn as nn
import numpy as np

class NeuralXCFunctional(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: dimensionless local density features
        # [rho^(1/3), reduced gradient s^2, kinetic ratio alpha]
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.GELU(),
            nn.Linear(64, 128), nn.GELU(),
            nn.Linear(128, 64), nn.GELU(),
            nn.Linear(64, 1),
        )

    def forward(self, rho, grad_rho_sq, tau):
        # Construct dimensionless SCAN-like density descriptors
        rho_third = rho.abs().pow(1.0/3.0) + 1e-10
        # Reduced gradient: s^2 = |nabla rho|^2 / (4*(3pi^2)^(2/3) * rho^(8/3))
        C = 4.0 * (3.0 * np.pi**2)**(2.0/3.0)
        s_sq = grad_rho_sq / (C * rho.abs().pow(8.0/3.0) + 1e-12)
        # Kinetic ratio: alpha = (tau - tau_TF)/tau_TF,
        # with tau_TF = (3/10)*(3pi^2)^(2/3) * rho^(5/3)
        tau_TF = 0.3 * (3.0 * np.pi**2)**(2.0/3.0) * rho.abs().pow(5.0/3.0)
        alpha = (tau - tau_TF) / (tau_TF + 1e-10)
        feats = torch.stack([rho_third, s_sq, alpha], dim=-1)
        # Energy density = net output * rho (correct dimensionality)
        return self.net(feats).squeeze(-1) * rho

    def xc_energy(self, rho, grad_rho_sq, tau, dv):
        # Integrate energy density over the real-space grid: E_xc = sum e_xc * dV
        e_xc = self.forward(rho, grad_rho_sq, tau)
        return (e_xc * dv).sum()   # total XC energy [Hartree]

# Training: minimise |E_xc^ML[rho] - E_xc^CCSD(T)[rho]|
# over a diverse set of molecular densities.
# Key challenge: the network must satisfy exact physical constraints:
#   - Lieb-Oxford bound: E_xc >= -1.679 * integral(rho^(4/3))
#   - Correct uniform electron gas limit
#   - Self-interaction correction for one-electron systems
# DM21 builds such constraints into its training.
💡 The DM21 Breakthrough — DeepMind's neural XC functional
Kirkpatrick et al. (2021) trained a neural XC functional on fractional-electron systems — a class of problems that has plagued DFT for 50 years. DM21 outperforms traditional approximations on delocalisation and static-correlation benchmarks. The weights and code are open source on DeepMind's GitHub. This is classical ML solving one of the deepest unsolved problems in quantum chemistry.
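To illustrate how an exact condition can enter training as a soft constraint — a sketch of the general idea only, not DM21's actual loss, with function names my own — a hinge penalty on the Lieb-Oxford bound could look like:

```python
import torch

def lieb_oxford_penalty(E_xc, rho, dv, C=1.679):
    """Hinge penalty: zero when E_xc >= -C * integral rho^(4/3) dV,
    growing linearly with the size of the violation otherwise."""
    lower_bound = -C * (rho.abs().pow(4.0 / 3.0) * dv).sum()
    return torch.relu(lower_bound - E_xc)

# Toy grid: uniform density rho=1 on 100 cells of volume 0.01,
# so integral(rho^(4/3)) dV = 1 and the bound is E_xc >= -1.679
rho = torch.ones(100)
dv = torch.full((100,), 0.01)
print(lieb_oxford_penalty(torch.tensor(0.0), rho, dv))    # satisfied -> 0
print(lieb_oxford_penalty(torch.tensor(-2.0), rho, dv))   # violated -> 0.321
```

In practice such a term would be added to the data-fitting loss with a tunable weight; DM21 itself encodes physical constraints through its training data and architecture rather than a literal hinge term.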
Section 8 — Quantum Advantage for ML: An Honest Assessment
The most important thing a physicist can contribute to quantum ML discussions is clarity. So let’s be direct: as of 2025, there is no demonstrated practical quantum advantage for machine learning on any real-world problem. The theoretical case is nuanced and the experimental reality is sobering. A student who understands why is better positioned than one who has memorised the hype.
✅ What Quantum ML Can Genuinely Offer
- Exponential state space without exponential memory — relevant for quantum chemistry
- Quantum kernels potentially classically hard to compute
- Native quantum data processing — when input is already quantum
- Long-term: fault-tolerant quantum computers may accelerate certain ML subroutines
❌ Current Limitations
- NISQ hardware: noise limits circuits to ~100 gates before errors dominate
- Barren plateaus: VQC gradients vanish exponentially with circuit size
- Data loading: encoding classical data into quantum states may erase any speedup
- Benchmark comparisons often use suboptimal classical baselines
🧠 Concept: The Barren Plateau Problem
The most serious theoretical obstacle for variational quantum ML is the barren plateau (McClean et al. 2018): for a randomly initialised deep quantum circuit, the gradient of the loss function vanishes exponentially with the number of qubits. For n=20 qubits, a typical VQC gradient is of order 2^{-20} ≈ 10^{-6} — training is essentially impossible. Proposed solutions (layerwise initialisation, problem-specific ansätze, tensor-network initialisation) partially mitigate this, but none fully resolves it. The barren plateau is an open research problem — not a solved engineering challenge.
The honest student's conclusion: learn quantum computing because it is beautiful, will matter enormously, and has real applications now (VQE for chemistry, QEC decoding, quantum simulation). But do not design your research programme around near-term quantum ML advantage over classical deep learning. Direction A — classical ML for quantum problems — is where the most reliable progress is happening today.
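The exponential decay behind the barren plateau can be observed directly with a small statevector simulation — a sketch in plain NumPy (hardware-efficient ansatz of random Pauli rotations plus CZ ladders; all names my own). The variance of one gradient component, estimated by the parameter-shift rule over random parameter draws, shrinks rapidly with qubit count:

```python
import numpy as np

def apply_1q(psi, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(psi.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(psi, q1, q2, n):
    """Controlled-Z between qubits q1 and q2."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def rot(axis, t):
    """RX/RY/RZ rotation by angle t, chosen by axis in {0, 1, 2}."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    if axis == 0:
        return np.array([[c, -1j * s], [-1j * s, c]])            # RX
    if axis == 1:
        return np.array([[c, -s], [s, c]], dtype=complex)        # RY
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])                   # RZ

def expval_z0(thetas, axes, n, layers):
    """<Z_0> after `layers` of random-axis rotations + CZ ladder."""
    psi = np.zeros(2**n, dtype=complex); psi[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            psi = apply_1q(psi, rot(axes[k], thetas[k]), q, n); k += 1
        for q in range(n - 1):
            psi = apply_cz(psi, q, q + 1, n)
    p = np.abs(psi.reshape(2, -1))**2
    return p[0].sum() - p[1].sum()

rng = np.random.default_rng(0)
layers, samples = 6, 200
variances = {}
for n in [2, 4, 6, 8]:
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, n * layers)
        ax = rng.integers(0, 3, n * layers)
        tp, tm = th.copy(), th.copy()
        tp[0] += np.pi / 2; tm[0] -= np.pi / 2
        # Parameter-shift gradient of <Z_0> w.r.t. the first angle
        grads.append(0.5 * (expval_z0(tp, ax, n, layers)
                            - expval_z0(tm, ax, n, layers)))
    variances[n] = np.var(grads)
    print(f"n={n}: Var[dE/dtheta_1] = {variances[n]:.5f}")
```

Already at n=8 the typical gradient is much smaller than at n=2; extrapolating the trend to tens of qubits is what makes naive random initialisation untrainable.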
External References & Further Reading
- Carleo & Troyer (2017) — Solving the quantum many-body problem with artificial neural networks. Science 355:602. arXiv:1606.02318 — The foundational NQS paper.
- Peruzzo et al. (2014) — A variational eigenvalue solver on a photonic quantum processor. Nature Communications. arXiv:1304.3061 — The original VQE paper.
- Pfau et al. (2020) — Ab initio solution of the many-electron Schrödinger equation with deep neural networks (FermiNet). Physical Review Research. arXiv:1909.02487
- Kirkpatrick et al. (2021) — Pushing the frontiers of density functionals by solving the fractional electron problem. Science 374:1385. doi/science.abj6511 — DM21 neural XC functional.
- Google Quantum AI (2023) — Suppressing quantum errors by scaling a surface code logical qubit. Nature 614:676. — Logical error suppression by scaling code distance; the clearly below-threshold result followed in 2024.
- McClean et al. (2018) — Barren plateaus in quantum neural network training landscapes. Nature Communications. arXiv:1803.11173
- PennyLane — pennylane.ai — Best QML framework. Tutorials cover VQE, QAOA, quantum kernels, hardware backends.
📋 Key Takeaways — Cluster 6
- Two directions, very different timelines. Classical ML for quantum problems is production-ready now. Quantum computing for ML is early research with no demonstrated practical advantage.
- NQS replaces 2^N amplitudes with a neural network. An RBM or transformer evaluates ψ(σ) on demand. VMC minimises 〈H〉 as a variational loss. Use NetKet for production; FermiNet for real molecules.
- VQE is quantum-classical hybrid. Quantum hardware prepares the state; classical optimiser updates θ. Parameter-shift rule gives exact quantum gradients from two circuit evaluations per parameter.
- QEC decoding is real engineering today. Google's surface code experiments run real-time classical decoders, and neural decoders now rival matching-based decoders in accuracy. This is operational infrastructure in current quantum hardware labs.
- DM21 made a breakthrough on a 50-year DFT problem. Its neural XC functional outperforms standard hand-crafted approximations on delocalisation error. Open source, available today.
- Barren plateaus are the fundamental obstacle for deep VQCs. Gradients vanish exponentially with qubit count. Be sceptical of claims about training deep variational circuits on large problems. This is an open research problem.