Ireland Quantum 100 · Technical brief

Hybrid quantum-classical loops — the architecture that ships


Anyone who has actually run a workload on today's quantum hardware knows the secret: the quantum part is small. The classical part — the optimiser, the parameter sweep, the error mitigation, the data marshalling — is where most of the engineering lives. The machines we are bringing online in Tipperary will not change that. They will sit inside a hybrid quantum-classical loop, and the quality of that loop will determine whether the workload is a research curiosity or something a chemist or grid engineer actually uses on a Tuesday morning.

Why hybrid is the only architecture that currently ships

A 100-qubit superconducting transmon processor running at sub-15 mK is a remarkable instrument, but it is not a general-purpose computer in the sense most people use the term. Coherence times sit in the tens to low hundreds of microseconds. Two-qubit gate fidelities on heavy-hex topologies are good but not perfect. Readout is probabilistic. The device is, in practical terms, a co-processor for sampling from quantum probability distributions that classical machines find expensive.

That is exactly the regime where hybrid quantum-classical algorithms earn their keep. Variational Quantum Eigensolver (VQE) for ground-state chemistry, Quantum Approximate Optimization Algorithm (QAOA) for combinatorial problems, and the broader family of variational quantum circuits all share the same skeleton: a parameterised circuit runs on the QPU, an expectation value comes back, a classical optimiser updates the parameters, and the loop repeats. The QPU is consulted, not commanded. Everything we build at Ireland Quantum 100 assumes this pattern as the default.
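
A minimal sketch of that skeleton, assuming Qiskit and SciPy are available: the statevector Estimator primitive stands in for the QPU, COBYLA plays the classical optimiser, and the two-qubit Hamiltonian is a toy chosen only to make the loop runnable end-to-end.

    # Hybrid loop skeleton: parameterised circuit -> expectation value ->
    # classical optimiser -> new parameters. The StatevectorEstimator is a
    # stand-in for the QPU; on hardware only this object changes.
    import numpy as np
    from scipy.optimize import minimize
    from qiskit.circuit.library import EfficientSU2
    from qiskit.primitives import StatevectorEstimator
    from qiskit.quantum_info import SparsePauliOp

    hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
    ansatz = EfficientSU2(num_qubits=2, reps=1)       # parameterised circuit
    estimator = StatevectorEstimator()                # QPU stand-in

    def energy(params):
        # One consultation of the "QPU": evaluate <H> at the current parameters.
        job = estimator.run([(ansatz, hamiltonian, params)])
        return float(job.result()[0].data.evs)

    x0 = np.zeros(ansatz.num_parameters)
    result = minimize(energy, x0, method="COBYLA", options={"maxiter": 200})
    print("Estimated ground-state energy:", result.fun)

On hardware, the optimiser and the loop structure stay exactly as written; only the estimator object is swapped for a device-backed one.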

The honest framing for the next several years is NISQ — Noisy Intermediate-Scale Quantum. We do not yet have fault-tolerant logical qubits at useful counts. The job, then, is to build a NISQ workflow that extracts genuine value from imperfect hardware while the surface-code roadmap matures underneath it.

Anatomy of a working quantum-classical loop

Strip a hybrid run to its components and you get six layers, each with its own latency budget and failure modes:

  • Problem encoding. A chemistry Hamiltonian, a QUBO, or a kernel evaluation is mapped into a parameterised circuit. This is almost always a Python-side operation using Qiskit, PennyLane, or Cirq, emitting OpenQASM 3.
  • Circuit transpilation. The logical circuit is rewritten against the device's native gate set and heavy-hex connectivity. SWAP insertion, gate fusion, and pulse-level scheduling happen here (sketched after this list).
  • Job submission. The transpiled circuit is queued at the control system, which drives the arbitrary waveform generators feeding microwave pulses down to the chip.
  • Execution and readout. Shots are taken — typically thousands per parameter point — and the dispersive readout returns bitstrings.
  • Post-processing. Measurement-error mitigation, zero-noise extrapolation, or probabilistic error cancellation runs on classical compute close to the QPU.
  • Optimiser update. A classical optimiser — SPSA, COBYLA, or a gradient method using the parameter-shift rule — proposes new parameters and the loop closes.
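
As a hedged illustration of the first two layers, the sketch below encodes a small logical circuit and transpiles it against one of the fake heavy-hex backends that ship with qiskit-ibm-runtime; the backend choice is illustrative, not a statement about the Tipperary device.

    # Layers one and two in miniature: build a small logical circuit, then
    # rewrite it against a fake heavy-hex backend so SWAP insertion and
    # basis-gate translation happen as they would for real hardware.
    from qiskit import QuantumCircuit
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
    from qiskit_ibm_runtime.fake_provider import FakeSherbrooke

    logical = QuantumCircuit(4)
    logical.h(range(4))
    for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        logical.cx(a, b)
    logical.measure_all()

    backend = FakeSherbrooke()                        # 127-qubit heavy-hex snapshot
    pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
    physical = pm.run(logical)

    print("logical depth: ", logical.depth())
    print("physical depth:", physical.depth())
    print("physical ops:  ", physical.count_ops())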

The naive view is that the QPU dominates the wall-clock. It rarely does. Queueing, transpilation, and the round-trip between optimiser and device are usually the bottleneck. A serious quantum integration effort starts by measuring those budgets honestly, not by reporting gate counts.
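
One way to do that measuring, sketched on the assumption that your loop is orchestrated from plain Python; the stage names are illustrative, not a fixed API.

    # Wrap each layer of the loop in a timer so the wall-clock budget is
    # measured rather than assumed.
    import time
    from collections import defaultdict
    from contextlib import contextmanager

    budgets = defaultdict(float)

    @contextmanager
    def stage(name):
        start = time.perf_counter()
        try:
            yield
        finally:
            budgets[name] += time.perf_counter() - start

    # Inside the loop, wrap each layer, for example:
    #     with stage("transpile"):
    #         physical = pm.run(logical)
    #     with stage("queue + execute"):
    #         result = job.result()
    # and afterwards print the honest breakdown:
    #     for name, seconds in sorted(budgets.items(), key=lambda kv: -kv[1]):
    #         print(f"{name:>16s}  {seconds:8.2f} s")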

The latency wall and why co-location matters

If your optimiser lives in a notebook on a laptop in another country, every iteration of your VQE pays a network round-trip. Multiply that by a few hundred iterations and a few thousand shots per iteration and the loop turns into a slideshow. This is the single biggest reason production hybrid systems co-locate the classical optimiser with the control stack.

Modern quantum runtimes — IBM's primitives, AWS Braket Hybrid Jobs, the Qiskit Runtime model — exist precisely to collapse that loop. They let you ship the entire optimiser as a session that runs adjacent to the control system, so the classical inner loop never leaves the building. We are designing the Tipperary site on the same principle: a tightly-coupled classical compute layer next to the dilution refrigerator, with user code able to run either as remote-submitted circuits or as full sessions hosted on-site.
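
A hedged sketch of that session pattern with the Qiskit Runtime primitives, assuming saved IBM Quantum credentials; the backend selection, Hamiltonian, and ansatz are illustrative toys.

    # The entire optimiser loop runs inside one Runtime Session, so successive
    # iterations are scheduled adjacent to the control stack instead of
    # re-entering the public queue each time.
    import numpy as np
    from scipy.optimize import minimize
    from qiskit.circuit.library import EfficientSU2
    from qiskit.quantum_info import SparsePauliOp
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
    from qiskit_ibm_runtime import EstimatorV2, QiskitRuntimeService, Session

    service = QiskitRuntimeService()                  # assumes a saved account
    backend = service.least_busy(operational=True, simulator=False)

    hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
    ansatz = EfficientSU2(num_qubits=2, reps=1)
    pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
    isa_ansatz = pm.run(ansatz)                       # device-native, routed circuit
    isa_hamiltonian = hamiltonian.apply_layout(isa_ansatz.layout)

    with Session(backend=backend) as session:
        estimator = EstimatorV2(mode=session)
        estimator.options.default_shots = 4000

        def energy(params):
            # The inner loop never leaves the Runtime session.
            job = estimator.run([(isa_ansatz, isa_hamiltonian, params)])
            return float(job.result()[0].data.evs)

        x0 = np.zeros(isa_ansatz.num_parameters)
        result = minimize(energy, x0, method="COBYLA", options={"maxiter": 150})
        print("Best energy found:", result.fun)

The structural point is that the energy callback, and therefore every optimiser iteration, executes inside the session rather than re-queueing from scratch.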

For climate workloads this matters more than it sounds. A VQE run on a non-trivial molecule — a CO₂ capture amine, say, or a candidate cathode material — can require thousands of expectation-value evaluations per geometry point, and you may want hundreds of geometry points. The difference between a 50 ms and a 500 ms iteration cost compounds into days versus weeks of calendar time.
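
A back-of-envelope version of that compounding, with workload sizes assumed purely for illustration:

    # Illustrative numbers only: how per-iteration overhead compounds across a
    # chemistry scan, before any actual QPU shot time is counted.
    evals_per_geometry = 5_000          # expectation-value evaluations per point
    geometry_points = 500               # points along the potential-energy surface
    total_evals = evals_per_geometry * geometry_points

    for overhead_s in (0.050, 0.500):   # 50 ms vs 500 ms of loop overhead per iteration
        days = total_evals * overhead_s / 86_400
        print(f"{overhead_s * 1000:.0f} ms per iteration -> {days:.1f} days of pure loop overhead")

    # 50 ms  -> roughly 1.4 days of overhead
    # 500 ms -> roughly 14.5 days of overhead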

Error mitigation as a first-class citizen of the loop

Until we have enough physical qubits to host meaningful logical qubits under a surface code, error mitigation is how we squeeze useful answers out of noisy hardware. It is not optional and it is not a footnote. It belongs inside the loop alongside the optimiser.

The techniques worth knowing:

  • Measurement-error mitigation. Characterise the readout confusion matrix and invert it. Cheap, almost always worth it (a sketch follows this list).
  • Zero-noise extrapolation (ZNE). Run the circuit at deliberately amplified noise levels — by gate folding or pulse stretching — and extrapolate back to the zero-noise limit. Costs more shots, but works well for moderate-depth circuits.
  • Probabilistic error cancellation (PEC). Sample from a quasi-probability distribution that cancels noise in expectation. Higher overhead, stronger guarantees, requires good gate-set tomography.
  • Dynamical decoupling. Insert idle-qubit pulse sequences to suppress dephasing during gaps in the schedule. Almost free, often helpful.
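
The cheapest of these, sketched for a single qubit with an illustrative confusion matrix; a real workflow characterises the matrix from calibration circuits, per qubit or per qubit group.

    # Measurement-error mitigation in miniature: invert the readout confusion
    # matrix to correct measured probabilities. Numbers are illustrative.
    import numpy as np

    # Columns: state actually prepared (0, 1). Rows: state reported by readout.
    confusion = np.array([[0.97, 0.06],
                          [0.03, 0.94]])

    measured = np.array([0.62, 0.38])        # raw probabilities from shots
    mitigated = np.linalg.solve(confusion, measured)
    mitigated = np.clip(mitigated, 0, None)
    mitigated /= mitigated.sum()             # renormalise to a valid distribution

    print("raw:      ", measured)
    print("mitigated:", mitigated)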

The architectural point is that each of these techniques changes the shot budget and therefore the loop latency. A serious NISQ workflow exposes them as composable layers — you choose your mitigation stack the way you choose a compiler optimisation level — rather than baking them into one monolithic pipeline.

Where the classical side actually does the work

A common misconception is that the QPU does the heavy lifting. For most useful hybrid algorithms it does not. Consider VQE on a chemistry problem. The classical side is doing:

  • Hartree-Fock to get the starting orbitals.
  • Active-space selection to pick which orbitals go onto the QPU at all.
  • Fermion-to-qubit mapping (Jordan-Wigner, Bravyi-Kitaev, parity).
  • Hamiltonian grouping into commuting Pauli sets to reduce measurement overhead (see the sketch after this list).
  • The optimiser itself, which on rugged, noisy cost landscapes is non-trivial — SPSA, natural gradient, or quantum-aware Bayesian methods.
  • All of the error mitigation post-processing.
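
One slice of that classical stack, sketched with Qiskit's SparsePauliOp on a toy Hamiltonian: grouping terms into qubit-wise commuting sets so each group shares a single measurement setting.

    # Hamiltonian grouping: eight Pauli terms collapse into a handful of circuit
    # settings, directly reducing the shot budget per expectation value.
    from qiskit.quantum_info import SparsePauliOp

    hamiltonian = SparsePauliOp.from_list([
        ("ZZII", 0.8), ("IZZI", 0.8), ("IIZZ", 0.8),
        ("XXII", 0.3), ("IXXI", 0.3), ("IIXX", 0.3),
        ("ZIII", 0.2), ("IIIZ", 0.2),
    ])

    groups = hamiltonian.group_commuting(qubit_wise=True)
    print(f"{hamiltonian.size} terms measured as {len(groups)} circuit settings")
    for group in groups:
        print("  ", group.paulis)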

The QPU's job, in this whole stack, is to evaluate one expectation value per call. Everything else is classical. Designing for that reality is what separates a hybrid system that ships from one that demos. You build the classical pipeline first, treat the QPU as a pluggable expectation-value oracle, and validate the entire workflow against a simulator before any cryostat is involved. We cover this in more detail in our piece on climate workloads on early quantum hardware.
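
A minimal sketch of that pluggable-oracle idea; the interface name and classes are illustrative, not an established API.

    # The optimiser only ever sees this interface, so swapping a statevector
    # simulator for a hardware-backed session is a one-line change in wiring.
    from typing import Protocol, Sequence

    class ExpectationOracle(Protocol):
        def expectation(self, params: Sequence[float]) -> float:
            """Return <H(params)> for the workload's fixed ansatz and Hamiltonian."""
            ...

    class SimulatorOracle:
        def __init__(self, ansatz, hamiltonian):
            from qiskit.primitives import StatevectorEstimator
            self._estimator = StatevectorEstimator()
            self._ansatz, self._hamiltonian = ansatz, hamiltonian

        def expectation(self, params):
            job = self._estimator.run([(self._ansatz, self._hamiltonian, params)])
            return float(job.result()[0].data.evs)

    # A HardwareOracle exposing the same .expectation() signature would wrap the
    # Runtime session shown earlier; the optimiser code never changes.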

The roadmap from variational to fault-tolerant

Hybrid loops are not a permanent compromise — they are the bridge to fault tolerance. The surface code, which is the leading candidate for error correction on superconducting hardware, requires roughly a thousand physical qubits per logical qubit at useful code distances, and that ratio is the optimistic end. We will not reach that on a 100-qubit machine. What we will do is establish the entire software, scheduling, and operator pipeline so that when larger devices arrive, the workloads already exist.

In the meantime the practical milestones are:

  • Single-qubit and two-qubit gate calibration at production fidelities across the device.
  • Mid-circuit measurement and conditional reset, which unlocks dynamic circuits and the first practical error-correction demonstrations (a minimal example follows this list).
  • Logical qubit prototypes — small surface-code patches running alongside variational workloads on the same device.
  • Hardware-aware compilation that targets logical operations rather than physical ones.
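
For the second of those milestones, a minimal dynamic-circuit sketch in Qiskit: measure mid-circuit and conditionally flip the qubit back to |0⟩ before the circuit continues.

    # Mid-circuit measurement plus conditional reset, expressed with if_test.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(1, 2)
    qc.h(0)
    qc.measure(0, 0)                         # mid-circuit measurement
    with qc.if_test((qc.clbits[0], 1)):
        qc.x(0)                              # conditional reset back to |0>
    qc.h(0)                                  # the circuit carries on after the reset
    qc.measure(0, 1)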

Each of these is a change to the loop's contract, not a replacement of it. The optimiser still sits on the classical side. The QPU still gets consulted. The interface stays roughly the same — which is why investing now in a clean hybrid architecture pays off when fault tolerance arrives.

Where to start this week

If you want to be ready for sovereign quantum capacity in Tipperary by the time the first multi-qubit access opens up, the work to do this week is unglamorous and entirely classical. Pick a real problem from your domain. Encode it as a parameterised circuit in Qiskit or PennyLane. Run it end-to-end on a noisy simulator with realistic device coupling and gate errors. Profile the loop. Find out where your wall-clock actually goes. Add measurement-error mitigation. Add ZNE. Measure again. By the time hardware access arrives, the only part of the workflow you will not already have exercised is the hardware itself.
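
A hedged starting point for that exercise, reusing the toy loop from earlier but drawing the noise model and connectivity from a fake heavy-hex backend; every problem size here is deliberately small.

    # End-to-end on a noisy simulator: transpile against a fake device, attach
    # its noise model to an Aer estimator, and close the loop classically.
    import numpy as np
    from scipy.optimize import minimize
    from qiskit.circuit.library import EfficientSU2
    from qiskit.quantum_info import SparsePauliOp
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
    from qiskit_aer.noise import NoiseModel
    from qiskit_aer.primitives import EstimatorV2 as AerEstimator
    from qiskit_ibm_runtime.fake_provider import FakeSherbrooke

    fake = FakeSherbrooke()                                 # heavy-hex snapshot backend
    pm = generate_preset_pass_manager(optimization_level=3, backend=fake)

    hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
    ansatz = EfficientSU2(num_qubits=2, reps=1)
    isa_ansatz = pm.run(ansatz)                             # device-native, routed circuit
    isa_hamiltonian = hamiltonian.apply_layout(isa_ansatz.layout)

    noise_model = NoiseModel.from_backend(fake)             # realistic gate and readout errors
    estimator = AerEstimator(options={"backend_options": {"noise_model": noise_model}})

    def energy(params):
        job = estimator.run([(isa_ansatz, isa_hamiltonian, params)])
        return float(job.result()[0].data.evs)

    result = minimize(energy, np.zeros(isa_ansatz.num_parameters),
                      method="COBYLA", options={"maxiter": 100})
    print("Noisy-simulator estimate:", result.fun)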

Research collaboration or early access

Book a research call →