Ireland Quantum 100 · Technical brief

The control stack — from FPGA to Josephson junction

Most of the public conversation about quantum computing happens at the two extremes — the qubit physics at the bottom, and the algorithm at the top. The interesting engineering, the bit that actually decides whether a machine works on a Tuesday afternoon, sits in the middle. It is the control stack: a chain of FPGAs, DACs, attenuators, mixers, filters and coaxial lines that turns a Python instruction into a microwave pulse hitting a Josephson junction at fifteen millikelvin. Get it wrong and your fidelities collapse; get it right and you have a machine you can actually run experiments on. This article walks the stack top to bottom, the way we are building it for Ireland Quantum 100.

What the control stack actually has to do

A superconducting transmon is, electrically, a non-linear LC oscillator with a transition frequency typically somewhere between 4 and 8 GHz. To drive a single-qubit gate you send a shaped microwave pulse — usually a Gaussian or DRAG envelope a few tens of nanoseconds long — at that transition frequency. To do a two-qubit gate on a fixed-frequency architecture you drive a cross-resonance tone; on a tunable-coupler architecture you adjust a flux bias. To read the qubit out you send a probe tone at the readout resonator's frequency and measure the dispersive shift in the returning signal.
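
A toy sketch of what "a shaped microwave pulse" means in samples: a Gaussian envelope modulated onto a carrier. The numbers here (40 ns, 2 GS/s, a 100 MHz intermediate frequency) are illustrative, not any particular system's values:

```python
import numpy as np

def gaussian_envelope(duration_ns=40, sigma_ns=10, dt_ns=0.5, amplitude=0.2):
    """Sampled Gaussian envelope for a single-qubit drive pulse.

    Real systems truncate the Gaussian and lift it so the pulse
    starts and ends at exactly zero; this sketch does the same.
    """
    t = np.arange(0, duration_ns, dt_ns)
    centre = duration_ns / 2
    env = np.exp(-0.5 * ((t - centre) / sigma_ns) ** 2)
    env -= env[0]                 # lift so the leading edge sits at zero
    env /= env.max()              # renormalise after the lift
    return amplitude * env, t

env, t = gaussian_envelope()
# Heterodyne picture: modulate the envelope onto an intermediate frequency.
if_ghz = 0.1                      # 100 MHz IF, hypothetical
signal = env * np.cos(2 * np.pi * if_ghz * t)
```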

So the quantum control stack has four jobs, all happening in parallel across every qubit on the chip:

  • Generate precisely shaped microwave pulses on demand, at GHz carrier frequencies, with nanosecond timing alignment between channels.
  • Acquire returning readout signals, demodulate them, integrate them and decide a 0 or 1.
  • Feed that decision back into the next pulse — for mid-circuit measurement, dynamic-circuit branching, and eventually surface-code syndrome extraction — within a coherence time.
  • Do all of the above without injecting thermal or shot noise that ruins the qubit you are trying to control.

That last point is the one most software people miss. Every wire going into the fridge is a thermal attack surface. The control stack is as much a heat-management problem as a signal-generation problem.

FPGA quantum control — why it has to be hardware

You cannot run a quantum control loop from a general-purpose CPU. The latency budget is too tight. T1 and T2 on a good transmon today sit in the hundreds of microseconds; a single round of surface-code syndrome extraction has to complete in something like a microsecond, and that includes pulse generation, readout integration, classical decoding, and the conditional next pulse. A round-trip through a kernel scheduler is already a coherence time wasted.

So the lowest-level real-time controller is an FPGA, or increasingly an RFSoC — an FPGA with high-speed data converters integrated on the same die. The FPGA holds:

  • A waveform memory containing pre-compiled pulse envelopes.
  • A sequencer that fires those envelopes against a deterministic clock, with sub-nanosecond resolution.
  • A demodulation pipeline that takes returning ADC samples, mixes them down, integrates against a kernel, and produces a single complex number per shot.
  • A small classical processor — sometimes a soft core, sometimes baked logic — that takes that number, compares against a threshold, and selects the next branch of the pulse program.
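
The demodulation step in that pipeline reduces to a few lines: digital downconversion, integration against a kernel, then a threshold. This mirrors the FPGA pipeline in spirit only; the function names and defaults are illustrative:

```python
import numpy as np

def demodulate(adc_samples, if_freq_hz, sample_rate_hz, kernel=None):
    """Digital downconversion + weighted integration for one readout shot.

    Mix the ADC record down by the intermediate frequency, integrate
    against a kernel, and return a single complex IQ point per shot.
    """
    n = np.arange(len(adc_samples))
    lo = np.exp(-2j * np.pi * if_freq_hz * n / sample_rate_hz)
    if kernel is None:
        kernel = np.ones(len(adc_samples))    # boxcar integration
    return np.sum(adc_samples * lo * kernel) / len(adc_samples)

def discriminate(iq_point, threshold=0.0, axis=1.0 + 0.0j):
    """Project onto the discrimination axis and threshold to 0 or 1."""
    return int((iq_point * np.conj(axis)).real > threshold)

# Synthetic shot: a 100 MHz tone sampled at 1 GS/s, in phase with the LO.
fs, f_if = 1e9, 100e6
t = np.arange(500)
shot = np.cos(2 * np.pi * f_if * t / fs)
iq = demodulate(shot, f_if, fs)
bit = discriminate(iq)
```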

The compiler that targets this hardware is the bit most software engineers underestimate. Translating an OpenQASM 3 circuit, or a Qiskit Pulse schedule, into FPGA microcode is not a mechanical mapping. The compiler has to handle phase tracking on virtual-Z gates, frame changes when you re-use a drive line, deadtime padding to respect filter response, and careful scheduling so that simultaneous pulses on neighbouring qubits do not crosstalk in ways the calibration didn't see.
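
Virtual-Z phase tracking, reduced to its essentials, is a per-line frame phase the compiler advances instead of playing a pulse. A minimal sketch, not any vendor's API:

```python
import numpy as np

class DriveFrame:
    """Per-drive-line phase frame: the bookkeeping behind virtual-Z gates.

    A virtual-Z costs no pulse. The compiler advances the frame phase,
    and every later pulse on that line is rotated into the new frame.
    """
    def __init__(self):
        self.phase = 0.0

    def virtual_z(self, angle_rad):
        self.phase = (self.phase + angle_rad) % (2 * np.pi)

    def apply(self, envelope):
        # Rotate the complex envelope into the current frame.
        return envelope * np.exp(1j * self.phase)

frame = DriveFrame()
frame.virtual_z(np.pi / 2)          # Rz(pi/2), implemented as a frame change
pulse = frame.apply(np.array([0.1 + 0j, 0.2 + 0j]))
# The in-phase envelope now plays on the other quadrature.
```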

Arbitrary waveform generation and the analogue chain

The FPGA outputs digital samples. To get from there to a microwave tone hitting the chip, you go through an analogue chain that is the source of most of the practical engineering pain.

The classic approach is heterodyne: the DAC produces an intermediate-frequency envelope, typically a few hundred MHz wide, which is then up-converted using an IQ mixer fed by a local oscillator at the qubit frequency. The mixer's two inputs need to be in quadrature, the LO needs to be clean, and any imbalance produces a sideband and an LO leakage tone — both of which can drive unintended transitions on the chip if they happen to land near another qubit's frequency. Calibrating mixer skew is a recurring chore.
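
The imbalance-to-sideband mechanism is easy to demonstrate numerically. The sketch below upconverts an ideal single-sideband IQ pair, then introduces a gain imbalance and phase skew and watches the image at the LO-minus-IF frequency reappear. All numbers are toy values:

```python
import numpy as np

fs = 50e9                      # toy sample rate, well above every tone
f_lo, f_if = 5.0e9, 0.1e9     # LO at 5 GHz, IF at 100 MHz
t = np.arange(500) / fs

# Ideal IQ upconversion: I*cos - Q*sin puts all power at f_lo + f_if.
i_env = np.cos(2 * np.pi * f_if * t)
q_env = np.sin(2 * np.pi * f_if * t)

def upconvert(i, q, gain_imbalance=1.0, phase_skew_rad=0.0):
    lo_i = np.cos(2 * np.pi * f_lo * t)
    lo_q = np.sin(2 * np.pi * f_lo * t + phase_skew_rad)
    return i * lo_i - gain_imbalance * q * lo_q

def tone_power(sig, f):
    # Power at frequency f via projection (a single-bin DFT).
    return np.abs(np.mean(sig * np.exp(-2j * np.pi * f * t))) ** 2

ideal = upconvert(i_env, q_env)
skewed = upconvert(i_env, q_env, gain_imbalance=1.05, phase_skew_rad=0.03)

# The image sideband is absent for a balanced mixer and comes back
# as soon as gain or phase imbalance is introduced.
image_ideal = tone_power(ideal, f_lo - f_if)
image_skewed = tone_power(skewed, f_lo - f_if)
```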

The newer approach, enabled by RFSoCs with sample rates that reach into the multi-GS/s range, is direct digital synthesis: the DAC produces the microwave tone directly, no mixer in the path. This removes a whole class of calibration problems. It moves them, of course — now you care about DAC linearity, image folding, and the steepness of your reconstruction filter — but the sum total is usually less misery.

Whichever route you take, arbitrary waveform generation is the right mental model. You are not playing a fixed sinusoid; you are playing a complex envelope that has been shaped to suppress leakage to the |2⟩ state (the DRAG correction), pre-distorted to compensate for the transfer function of the line itself, and timed to the picosecond against every other channel on the system.

Josephson control — pulses at the chip

By the time the signal arrives at the chip it has been attenuated by something like 60 dB across the fridge stages — necessary, because the line is otherwise a thermal short from room temperature down to the mixing chamber. The drive line couples capacitively to the transmon. The flux line, on tunable architectures, couples inductively to the SQUID loop and biases the Josephson energy.

This is where Josephson control stops being abstract. The transmon's nonlinearity comes from the cosine potential of the Josephson junction; the anharmonicity is typically a few hundred MHz negative. That anharmonicity is what makes the qubit a qubit rather than a harmonic oscillator — it is what lets you address the |0⟩→|1⟩ transition without also driving |1⟩→|2⟩. Pulse shaping exists precisely to respect that finite anharmonicity. A square pulse has frequency content broad enough to drive both transitions; a Gaussian with a DRAG correction concentrates spectral weight on the wanted transition and actively cancels the unwanted one.
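
The first-order DRAG envelope is compact enough to write down: the in-phase quadrature is the Gaussian, the other quadrature is a multiple of its time derivative. The coefficient beta is calibrated on hardware and is of order one over the anharmonicity; the value below is purely illustrative:

```python
import numpy as np

def drag_pulse(duration_ns=40, sigma_ns=10, dt_ns=0.5,
               amplitude=0.2, beta=0.5):
    """Gaussian-with-derivative (DRAG) envelope, first-order form.

    The derivative quadrature cancels leakage to |2> to first order.
    beta here is a placeholder; on hardware it is calibrated.
    """
    t = np.arange(0, duration_ns, dt_ns)
    centre = duration_ns / 2
    gauss = np.exp(-0.5 * ((t - centre) / sigma_ns) ** 2)
    d_gauss = -(t - centre) / sigma_ns**2 * gauss    # analytic derivative
    return amplitude * (gauss + 1j * beta * d_gauss)

pulse = drag_pulse()
# The derivative term is odd about the pulse centre, so it integrates
# to (approximately) zero and leaves the rotation angle nearly intact.
```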

On the readout side, the probe tone hits the readout resonator, picks up a dispersive phase shift conditioned on the qubit state, and the reflected signal goes back up through a parametric amplifier at the still-cold stage, then a HEMT at the 4 K stage, then room-temperature amplification, before being digitised. The signal-to-noise budget of that chain is what sets your single-shot readout fidelity, and parametric amplifier saturation is one of those things that quietly kills fidelity in ways that look like everything else.

Calibration is the product

An uncalibrated quantum computer is a very expensive room heater. The control stack ships with — and is largely defined by — its calibration routines: Rabi to find drive amplitude, Ramsey to find detuning, DRAG calibration to suppress leakage, randomised benchmarking to measure gate fidelity, cross-entropy benchmarking, simultaneous-RB to catch crosstalk, T1 and T2 echo sequences run continuously to detect drift.
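
As a flavour of what "Rabi to find drive amplitude" means in code, here is a minimal amplitude-Rabi analysis on a synthetic, noiseless trace: estimate the oscillation frequency, then take half a period as the pi-pulse amplitude. Real routines fit a full model to noisy shot averages; the numbers here are invented:

```python
import numpy as np

# Synthetic Rabi scan: excited-state population versus drive amplitude.
# On hardware each point is the average of thousands of shots.
a_pi_true = 0.1                        # hypothetical pi-pulse amplitude
amps = np.linspace(0, 1.0, 201)
pop = 0.5 * (1 - np.cos(np.pi * amps / a_pi_true))

# Estimate the oscillation frequency from the dominant FFT bin;
# a pi pulse is half a Rabi period in amplitude.
spectrum = np.abs(np.fft.rfft(pop - pop.mean()))
da = amps[1] - amps[0]
freqs = np.fft.rfftfreq(len(amps), d=da)
f_est = freqs[np.argmax(spectrum)]     # cycles per unit amplitude
a_pi_est = 1 / (2 * f_est)
```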

On a 100-qubit machine these routines have to run themselves. There is no operator turning a knob. The calibration graph — which experiments depend on which prior calibrations, and how long each result remains trusted — is a piece of software infrastructure that matters as much as the compiler. We treat it that way. It is also the layer at which the system meets its users: the pulse-level access surfaced through Qiskit Pulse, OpenPulse, or PennyLane's hardware backends is only as honest as the calibration underneath it. More on how this connects to the wider Ireland Quantum 100 programme elsewhere on the site, including the choices around topology and the climate-workload cohort.
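
A calibration graph in miniature — a sketch with hypothetical node names and lifetimes — showing the staleness propagation described above: if a parent experiment has expired, everything downstream must re-run too:

```python
import time

# Each node names its prerequisites and how long its result stays
# trusted. Node names and lifetimes are illustrative.
CAL_GRAPH = {
    "resonator_spectroscopy": {"deps": [], "lifetime_s": 24 * 3600},
    "qubit_spectroscopy": {"deps": ["resonator_spectroscopy"], "lifetime_s": 12 * 3600},
    "rabi": {"deps": ["qubit_spectroscopy"], "lifetime_s": 4 * 3600},
    "ramsey": {"deps": ["rabi"], "lifetime_s": 2 * 3600},
    "drag": {"deps": ["rabi", "ramsey"], "lifetime_s": 4 * 3600},
}

def stale_closure(last_run, now=None):
    """Return every experiment that must re-run, in dependency order.

    An experiment is stale if its own result has expired or if any
    prerequisite needs re-running: staleness propagates downstream.
    """
    now = time.time() if now is None else now
    order, stale = [], set()

    def visit(name):
        if name in order:
            return
        for dep in CAL_GRAPH[name]["deps"]:
            visit(dep)
        expired = now - last_run.get(name, 0) > CAL_GRAPH[name]["lifetime_s"]
        if expired or any(d in stale for d in CAL_GRAPH[name]["deps"]):
            stale.add(name)
        order.append(name)

    for name in CAL_GRAPH:
        visit(name)
    return [n for n in order if n in stale]
```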

The feedback loop and the road to error correction

Everything above is the NISQ-era control stack — good enough to run variational chemistry, QAOA, and the kind of moderate-depth circuits where climate-relevant workloads currently live. The next step is real-time syndrome extraction for the surface code, and that is where the control stack stops being a peripheral and becomes the computer.

Surface-code decoding has to happen in the loop. Syndromes come off the chip every microsecond or so; a decoder — minimum-weight perfect matching, union-find, or a neural decoder — has to consume them and produce a Pauli frame update before the next round. That decoder cannot live on a host PC over PCIe. It lives on the FPGA fabric, or on a dedicated co-processor sharing memory with it. The control stack we are building is being scoped with this trajectory in mind even though first light is single-qubit; you do not want to rebuild the lowest layer twice. The architectural choices around error correction influence the FPGA topology from day one.
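
The shape of that loop — syndrome in, Pauli-frame update out, no physical pulse — fits in a toy example. The three-qubit bit-flip code admits a lookup-table decoder; real surface-code decoders (matching, union-find) are vastly larger, but they occupy exactly this slot in the pipeline:

```python
# Syndrome (s01, s12) for the 3-qubit bit-flip code -> X correction.
LOOKUP = {
    (0, 0): [0, 0, 0],   # no error
    (1, 0): [1, 0, 0],   # flip on qubit 0
    (1, 1): [0, 1, 0],   # flip on qubit 1
    (0, 1): [0, 0, 1],   # flip on qubit 2
}

class PauliFrame:
    """Accumulated X corrections, applied in software, never as pulses."""
    def __init__(self, n):
        self.x = [0] * n

    def update(self, correction):
        self.x = [a ^ b for a, b in zip(self.x, correction)]

def syndrome(data):
    # Parity checks between neighbouring data qubits.
    return (data[0] ^ data[1], data[1] ^ data[2])

# One error-correction round on a toy state with a flip on qubit 1.
frame = PauliFrame(3)
data = [0, 1, 0]
frame.update(LOOKUP[syndrome(data)])
decoded = [d ^ f for d, f in zip(data, frame.x)]
```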

Where to start this week

If you want to get hands-on with this layer without owning a fridge: install Qiskit and look at the qiskit.pulse module, then read the OpenPulse specification — it will teach you more about what a control stack actually exposes than any number of architecture diagrams. If you are coming from an FPGA background, the open-source QICK framework runs on commodity RFSoC boards and gives you a real pulse-generation-and-readout pipeline you can poke at. Either route, the lesson is the same: the control stack is where the physics meets the software, and there is no substitute for working with it directly.

Research collaboration or early access

Book a research call →