VQE is the algorithm I get asked about most when chemists hear we're standing up a hundred-qubit superconducting machine in Tipperary. The honest answer is that it's the most credible near-term workload we have, and also the one most over-sold in conference talks. So let me lay out what's actually reachable on a 100-qubit transmon device in the NISQ regime, where the wall is, and what a working chemistry team should attempt first.
What VQE actually is, mechanically
The variational quantum eigensolver is a hybrid loop. You prepare a parameterised quantum state — the ansatz — on the QPU, measure the expectation value of the molecular Hamiltonian by sampling, hand the energy back to a classical optimiser, and the optimiser proposes new parameters. Repeat until the energy stops decreasing. The variational principle guarantees the result is an upper bound on the true ground-state energy.
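To make the loop concrete, here's a minimal sketch in plain numpy and scipy, with the QPU replaced by exact statevector algebra and a made-up one-qubit Hamiltonian. Everything in it is illustrative, but the shape of the loop is the real thing.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-qubit "molecular" Hamiltonian in the Pauli basis: H = c_I*I + c_Z*Z + c_X*X.
# On real hardware each Pauli term is estimated by sampling; here we use exact algebra.
I = np.eye(2); Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = -1.05 * I + 0.39 * Z - 0.18 * X

def ansatz(theta):
    # One-parameter hardware-style ansatz: Ry(theta)|0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # The "QPU" step: prepare |psi(theta)> and measure <H>.
    psi = ansatz(theta[0])
    return float(psi @ H @ psi)

# The classical optimiser closes the loop.
result = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy {result.fun:.6f}  vs exact ground state {exact:.6f}")
# The variational principle guarantees result.fun >= exact (up to optimiser convergence).
```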
The quantum part does one job: it represents a wavefunction that's hard to write down classically and lets you measure observables on it cheaply. Everything else — Hamiltonian construction, fermion-to-qubit mapping (Jordan-Wigner, Bravyi-Kitaev, parity), parameter optimisation, post-processing — happens on a classical box. This is why VQE is considered the flagship NISQ-era chemistry algorithm: it tolerates noisy gates and shallow circuits in a way that, say, quantum phase estimation does not.
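If you've never seen a fermion-to-qubit mapping written out, here's Jordan-Wigner in bare matrices on a toy two-qubit register, nothing library-specific:

```python
import numpy as np

# Jordan-Wigner, concretely: a_j = (prod_{k<j} Z_k) (X_j + iY_j)/2,
# tensored up to the full register. The Z string carries fermionic parity.
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0 + 0j, -1.0])

def jw_annihilation(j, n_qubits):
    # Build a_j on an n-qubit register as a Kronecker product of Paulis.
    op = np.array([[1.0 + 0j]])
    for k in range(n_qubits):
        if k < j:
            factor = Z                  # parity string
        elif k == j:
            factor = (X + 1j * Y) / 2   # lowering operator on site j
        else:
            factor = I
        op = np.kron(op, factor)
    return op

a0 = jw_annihilation(0, 2)
n0 = a0.conj().T @ a0  # number operator for spin-orbital 0
# Occupation numbers come out as 0s and 1s, as they must for a fermionic mode.
print(np.round(np.linalg.eigvalsh(n0), 6))
```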
In practice on a transmon machine, you'll write the ansatz in OpenQASM 3 or build it through Qiskit Nature or PennyLane's quantum-chemistry module. The compiler maps logical qubits onto the heavy-hex topology, inserts SWAPs where two-qubit gates aren't physically adjacent, and you live or die by the depth of the resulting circuit.
What "100 qubits" means for molecules
Naive arithmetic says 100 qubits = 100 spin-orbitals = roughly 50 spatial orbitals = a respectable active space. That arithmetic is wrong in every direction that matters.
First, you don't get all 100 qubits for the wavefunction. Some are burnt on ancillas for measurement reduction and error mitigation, and — depending on your mapping — some carry redundant symmetry information (parity, particle number) that you can taper off if the symmetries allow. A practical rule on a 100-physical-qubit NISQ device is that 60-80 qubits are available to the chemistry payload after you account for the topology penalty and routing.
Second, the Hamiltonian term count scales as O(N⁴) in the spin-orbital count for a generic molecule, and each term needs measurement. Grouping commuting Pauli strings helps a lot, but you're still looking at thousands of measurement circuits per energy evaluation for anything beyond toy systems.
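Here's a sketch of the standard greedy grouping by qubit-wise commutativity. The Pauli strings are illustrative, and production stacks use cleverer partitions, but this is the core idea:

```python
# Greedy grouping of Pauli strings into qubit-wise commuting (QWC) sets.
# Each group can be measured with a single circuit plus local basis rotations.
def qwc_compatible(p, q):
    # Two strings are QWC-compatible if, qubit by qubit, the Paulis
    # are equal or at least one of them is the identity.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_qwc_groups(pauli_strings):
    groups = []
    for p in pauli_strings:
        for g in groups:
            if all(qwc_compatible(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

terms = ["ZZII", "ZIIZ", "XXII", "IIXX", "IZZI", "YYII"]
for g in greedy_qwc_groups(terms):
    print(g)
# Fewer groups means fewer distinct measurement circuits per energy evaluation.
```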
Third — and this is the binding constraint — the ansatz depth is what kills you. UCCSD (unitary coupled cluster, singles and doubles) at full chemical accuracy on a forty-orbital system produces a circuit that's nowhere near executable on current coherence times. So in the NISQ regime you use hardware-efficient ansätze or aggressively truncated chemistry-inspired ansätze (k-UpCCGSD, ADAPT-VQE) and accept that you're trading variational expressiveness for circuit depth that fits inside your T₂.
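For a feel for the trade, here's a hardware-efficient ansatz sized by repetition count, using Qiskit's stock EfficientSU2 library circuit. The parameters are illustrative, and the exact depth you get depends on your Qiskit version and basis gates:

```python
from qiskit.circuit.library import EfficientSU2

# A hardware-efficient ansatz sized against the depth budget rather than
# against the chemistry: Ry/Rz layers with linear entanglement, repeated.
for reps in (1, 2, 4):
    qc = EfficientSU2(num_qubits=12, entanglement="linear", reps=reps)
    qc = qc.decompose()
    print(reps, "reps ->", qc.depth(), "depth,", qc.num_parameters, "parameters")
```

Every extra repetition buys expressiveness and costs a layer of two-qubit gates against the budget computed in the next section.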
The depth budget on a transmon QPU
This is the engineering conversation that doesn't happen often enough. On a superconducting transmon system held at sub-15 mK in a dilution refrigerator, your two-qubit gate fidelity is typically the limiting factor — call it somewhere in the 99.0-99.7% region for cross-resonance or tunable-coupler gates on a well-calibrated device. Your single-qubit gates are an order of magnitude better and effectively free in the depth analysis.
Take a circuit with D layers of two-qubit gates across an N-qubit register, roughly N/2 gates per layer. The probability that no two-qubit gate errored is roughly (1-ε)^(D·N/2), where ε is the per-gate infidelity. At ε = 0.5% and N = 60, you cross the 50% mark at D ≈ 5 layers (ln 0.5 / (30 ln 0.995) ≈ 4.6); you'd need ε down around 0.1% before that stretches to D ≈ 23. That's your honest depth budget before error mitigation.
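That arithmetic is worth keeping as a function you can argue with. A few lines of Python, assuming nothing beyond the formula above:

```python
import math

def depth_budget(epsilon, n_qubits, success=0.5, gates_per_layer=None):
    # Layers of two-qubit gates before the whole-circuit success probability
    # (1 - epsilon)^(D * N/2) drops below `success`.
    g = gates_per_layer or n_qubits / 2
    return math.log(success) / (g * math.log(1 - epsilon))

for eps in (0.001, 0.003, 0.005, 0.01):
    print(f"eps = {eps:.3f}: D ~ {depth_budget(eps, 60):.1f} layers")
# eps = 0.005 on 60 qubits gives roughly 4-5 layers; only at a two-qubit
# infidelity of 0.1% do you see ~23 layers.
```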
Now layer in the heavy-hex topology. Heavy-hex is genuinely a good choice — it suppresses crosstalk and gives clean syndrome extraction patterns for the surface-code roadmap — but for VQE it means logical-to-physical routing inserts SWAPs. A fully-entangling brick-wall ansatz on 60 qubits doesn't lay down on heavy-hex without overhead. You either accept the SWAP cost or you choose an ansatz topology that respects the connectivity, which is what serious VQE work does.
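You can put a number on the SWAP tax yourself. The sketch below, assuming a recent Qiskit install, routes a naive brick-wall entangler onto a stock heavy-hex coupling map and compares gate counts:

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# CouplingMap.from_heavy_hex is a stock Qiskit constructor (distance must be odd);
# distance 5 gives a register in the neighbourhood of our usable-payload estimate.
cmap = CouplingMap.from_heavy_hex(5)
n = cmap.size()

qc = QuantumCircuit(n)
for layer in range(2):
    for q in range(layer % 2, n - 1, 2):  # brick-wall pattern of CX gates
        qc.cx(q, q + 1)

routed = transpile(qc, coupling_map=cmap, optimization_level=3, seed_transpiler=7)
print("qubits:", n)
print("logical CX count:", qc.count_ops().get("cx", 0))
print("after routing   :", dict(routed.count_ops()))
# The gap between the two counts is the SWAP tax the ansatz pays on heavy-hex.
```

Whatever multiple the routed count comes out at, that multiple is depth you never budgeted for, which is exactly why connectivity-aware ansätze win.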
The result: on a 100-qubit transmon machine, the chemistry problems that fit comfortably within the depth budget are active spaces of roughly 20-40 spin-orbitals with carefully designed connectivity-aware ansätze. Anything bigger needs error mitigation to extend the effective coherence — zero-noise extrapolation, probabilistic error cancellation, or symmetry verification — and you pay for those in extra shots.
Error mitigation versus error correction — the timeline
For VQE in the next 18-24 months, mitigation is the answer, not correction. Surface-code logical qubits at chemical-accuracy thresholds need physical-qubit counts in the thousands per logical qubit. The path to fault-tolerant quantum chemistry runs through phase estimation, not VQE, and it's a 5-10 year horizon.
What does work today:
- Zero-noise extrapolation (ZNE): deliberately stretch the noise — typically by gate folding — run the circuit at several noise levels, and extrapolate back to the zero-noise limit (sketched after this list). Costs you 3-5x the shot count. Works well for smoothly behaved observables like energy.
- Symmetry verification: the molecular Hamiltonian conserves particle number and spin. Measure those, throw away shots that violate them. Cheap, effective, and catches a lot of bit-flip-like errors.
- Measurement-error mitigation: calibrate the readout confusion matrix, invert it on the measured distributions. Standard practice.
- Probabilistic error cancellation (PEC): more powerful than ZNE in principle, but the shot overhead grows exponentially with circuit volume, so it's only practical for shallow circuits.
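As promised above, here's the gate-folding mechanic behind ZNE in sketch form. The circuit manipulation is standard Qiskit; the energies in the extrapolation step are made up purely for illustration:

```python
import numpy as np
from qiskit import QuantumCircuit

# ZNE by global gate folding: C -> C C_dagger C multiplies the gate count (and so,
# roughly, the noise) by an odd factor without changing the ideal unitary.
def fold(circuit, scale):
    assert scale % 2 == 1, "global folding gives odd noise-scale factors"
    folded = circuit.copy()
    for _ in range((scale - 1) // 2):
        folded = folded.compose(circuit.inverse()).compose(circuit)
    return folded

circ = QuantumCircuit(2)
circ.h(0); circ.cx(0, 1); circ.rz(0.3, 1); circ.cx(0, 1)
print("depth at scales 1/3/5:", [fold(circ, s).depth() for s in (1, 3, 5)])

# On hardware you'd measure <H> at each scale; the energies below are made up
# purely to show the classical extrapolation back to the zero-noise limit.
scales = np.array([1.0, 3.0, 5.0])
energies = np.array([-1.117, -1.049, -0.986])  # illustrative, not real data
print("linear ZNE estimate:", np.polyval(np.polyfit(scales, energies, 1), 0.0))
```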
Stack these together and you can stretch the effective coherence considerably. You're not getting fault-tolerant accuracy, but you can get to chemical accuracy (1 kcal/mol) for small active spaces and useful relative energies for larger ones.
What's actually worth running
The trap with VQE is to pick a problem that's either too easy (FCI in an STO-3G basis on H₂ — solved a hundred times, demonstrates nothing) or too hard (full active space of an industrially-relevant catalyst — won't fit). The interesting band is problems where the active space is genuinely correlated, classical methods like CCSD(T) struggle or fail, and the quantum register can hold the relevant orbitals.
For the climate-chemistry cohort we're prioritising on the Ireland Quantum 100 machine, the candidate workloads cluster around:
- Carbon-capture sorbent chemistry: amine-CO₂ binding energetics where multi-reference character matters. Active spaces of 20-30 spin-orbitals around the binding site are tractable and chemically meaningful.
- Photovoltaic absorber screening: excited-state energies of candidate perovskite and organic absorbers via subspace-expansion VQE variants. Excited states are harder than ground states, but the active space requirements are similar.
- Battery cathode redox chemistry: transition-metal oxide active sites where strong correlation is the rule, not the exception. This is where classical CCSD(T) genuinely breaks.
- Catalytic intermediates for green hydrogen: single-atom and small-cluster catalyst chemistry where the active space is small enough to fit and the science is unsettled.
None of these will be solved end-to-end by VQE on a 100-qubit device. What VQE will do — credibly, in this hardware generation — is provide reference energies for active-space fragments that get embedded into a larger classical multiscale calculation. The quantum machine handles the part the classical machine can't, and the classical machine handles everything else. That's the architecture pattern, and if you're not designing toward it, you're going to be disappointed.
The classical optimiser is half the problem
One thing that surprises people new to VQE: the classical optimiser is often the bottleneck, not the QPU. You're optimising a noisy, non-convex landscape with hundreds to thousands of parameters. SPSA, COBYLA, and natural-gradient methods all have their failure modes. Barren plateaus — regions where gradients vanish exponentially with system size — are a real phenomenon and have killed more VQE runs than gate errors have.
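If SPSA is unfamiliar, here's the whole algorithm in a dozen lines of numpy. The objective is a noisy quadratic standing in for a sampled VQE energy, and the gain constants are illustrative, not tuned:

```python
import numpy as np

# SPSA: the gradient is estimated from just two noisy energy evaluations per
# step, independent of the parameter count, which is why it tolerates shot
# noise better than finite differences do.
def spsa_minimize(f, theta, iters=300, a=0.05, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k**0.602, c / k**0.101   # standard SPSA gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) * delta
        theta = theta - ak * g
    return theta

# Stand-in objective: a quadratic bowl plus shot noise, mimicking a sampled
# VQE energy surface.
rng = np.random.default_rng(1)
noisy_energy = lambda t: float(np.sum(t**2) + 0.01 * rng.normal())
theta0 = np.ones(10)
theta = spsa_minimize(noisy_energy, theta0)
print(f"energy: {noisy_energy(theta0):.3f} -> {noisy_energy(theta):.3f}")
```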
The mitigations are known: chemistry-inspired ansätze rather than fully hardware-efficient ones, layer-wise training, parameter initialisation from classical pre-computation (e.g. CCSD amplitudes as starting points for UCC parameters), and ADAPT-VQE which grows the ansatz operator-by-operator based on gradient information. If you're planning serious VQE work on the upcoming machine, allocate as much engineering time to the optimiser stack as to the circuit.
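ADAPT's selection rule fits in a few lines too. This sketch uses random matrices as stand-ins for the real Hamiltonian and operator pool, purely to show the gradient criterion:

```python
import numpy as np

# ADAPT-VQE's selection rule, in miniature: the energy gradient of appending
# exp(theta * A) at theta = 0 is <psi|[H, A]|psi>; grow the ansatz with the
# pool operator whose gradient has the largest magnitude.
rng = np.random.default_rng(0)
dim = 8
H = rng.normal(size=(dim, dim)); H = (H + H.T) / 2   # Hermitian toy Hamiltonian
pool = [rng.normal(size=(dim, dim)) for _ in range(4)]
pool = [A - A.T for A in pool]                       # anti-Hermitian generators
psi = np.linalg.eigh(H)[1][:, 1]                     # some reference state

grads = [abs(psi @ (H @ A - A @ H) @ psi) for A in pool]
print("pool gradients:", np.round(grads, 4))
print("ADAPT would grow the ansatz with operator", int(np.argmax(grads)))
```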
Where to start this week
If you're a chemistry team thinking about VQE on the Ireland Quantum 100 machine, do three things this week. One: pick a real molecule from your act