Ireland Quantum 100 · Technical brief

Why 100 qubits is the right first target — and what we deliberately deferred


A hundred physical qubits will not factor RSA-2048, will not break SHA-256, and will not run Shor's algorithm on anything that keeps a cryptographer awake. I want to say that out loud at the top of this piece because the gap between what a 100-qubit machine actually does and what the press cycle says it does is the single biggest source of bad procurement decisions I see in Irish industry right now. The Ireland Quantum 100 target was not chosen because 100 is a round number that looks good on a slide. It was chosen because, in the current state of superconducting transmon engineering, it is the largest count where you can still be honest about what comes out the other end — and the smallest count where the answers start mattering for real chemistry.

The NISQ era is a constraint, not a marketing label

John Preskill's 2018 framing of the Noisy Intermediate-Scale Quantum era has aged better than almost any other prediction in the field. We are still firmly inside it. NISQ means three things that do not get repeated often enough: gate fidelities are good but not good enough for arbitrary-depth circuits, you do not yet have the physical qubit budget to wrap meaningful logical qubits in a surface code, and your useful circuit depth is bounded by T1 and T2 coherence times measured in tens to low hundreds of microseconds.

That last point is the one that drives the architecture. If your two-qubit gate takes a few hundred nanoseconds and your coherence is around 100 microseconds, your honest circuit depth is in the low thousands of gates before noise eats the signal. Every algorithmic choice — variational ansatz depth, Trotter step count, measurement strategy — has to live inside that budget. A 100-qubit machine that respects this budget produces useful chemistry. A 1,000-qubit machine that ignores it produces noise with a higher electricity bill.
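
Here is that budget as a runnable back-of-envelope. Every number below is an illustrative assumption drawn from the ranges quoted above, not a measured spec for any device:

```python
# Back-of-envelope NISQ depth budget. Every number is an illustrative
# assumption drawn from the ranges quoted above, not a measured spec.

T2_S = 100e-6          # assumed coherence window: ~100 microseconds
T_2Q_GATE_S = 300e-9   # assumed two-qubit gate duration: ~300 ns

# Sequential two-qubit layers that fit inside one coherence window:
layers = int(T2_S / T_2Q_GATE_S)

# Counting the interleaved single-qubit gates on the same worldline
# (assume ~2 per layer) gives the total gate budget on the critical path:
budget = layers * 3

print(f"~{layers} two-qubit layers, ~{budget} gates before noise wins")
# ~333 two-qubit layers, ~999 gates before noise wins
```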

Why 100, specifically

The case for 100 physical transmons as a first-light target rests on four things that all have to be true at once.

First, the dilution refrigerator thermal budget. Each qubit's control and readout lines carry heat down through the stages of the cryostat. At the mixing chamber stage you are working with cooling power measured in microwatts at sub-15 mK. A hundred qubits with proper input attenuation, isolators, and parametric amplifier readout sits comfortably inside the thermal envelope of a commercial dilution unit. Two hundred starts to require multiplexed readout schemes that are still maturing. A thousand needs cryo-CMOS control electronics that do not yet exist as an off-the-shelf product.
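
The scaling argument fits in a few lines of arithmetic. Every figure in this sketch is a rough assumption chosen to show the trend, not any fridge's spec sheet or our wiring scheme:

```python
# Illustrative mixing-chamber (MXC) heat-load check. Every figure is a
# rough assumption chosen to show the scaling, not any fridge's spec sheet.

COOLING_POWER_W = 15e-6   # assumed MXC cooling power: ~15 microwatts
LINES_PER_QUBIT = 2       # assumed drive plus readout share per qubit
HEAT_PER_LINE_W = 40e-9   # assumed residual load per attenuated line

def mxc_load_w(n_qubits: int) -> float:
    """Total assumed heat load at the MXC stage for n_qubits."""
    return n_qubits * LINES_PER_QUBIT * HEAT_PER_LINE_W

for n in (100, 200, 1000):
    frac = mxc_load_w(n) / COOLING_POWER_W
    print(f"{n:>4} qubits: {mxc_load_w(n) * 1e6:.1f} uW ({frac:.0%} of budget)")
# Under these assumptions 100 qubits sits inside the envelope;
# 200 is already over it without multiplexed readout.
```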

Second, the heavy-hex topology. IBM's choice of heavy-hexagonal lattice — qubits with degree-2 and degree-3 connectivity rather than the degree-4 of a square grid — was not aesthetic. It reduces frequency collisions and crosstalk in fixed-frequency transmon designs. At around 100 qubits you can lay out a heavy-hex patch that is large enough to host non-trivial logical experiments and small enough that you can characterise every qubit and every coupler individually before declaring the device fit for users.
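
You can inspect the topology yourself with Qiskit's built-in generator, assuming `pip install qiskit`. Distance 7 is an illustrative choice that lands near the 100-qubit scale, not a claim about any specific device:

```python
# Sketch: generate a heavy-hex coupling map and check its connectivity.
from collections import Counter
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(7, bidirectional=False)

degrees = Counter()
for a, b in cmap.get_edges():
    degrees[a] += 1
    degrees[b] += 1

print(f"{cmap.size()} qubits")
print("connectivity histogram:", sorted(Counter(degrees.values()).items()))
# Degrees cluster at 2 and 3: the property that limits frequency
# collisions and crosstalk in fixed-frequency transmon layouts.
```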

Third, the calibration overhead. A working superconducting processor is not a thing you turn on. It is a thing you re-calibrate continuously: single-qubit gate amplitudes, DRAG coefficients, two-qubit gate parameters, readout discriminators, all drifting on timescales of hours. At 100 qubits this is tractable with current calibration stacks. Above a few hundred, calibration becomes a research problem in its own right, not a deployment problem.
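
A deliberately naive serial estimate shows why the tractability boundary sits where it does. The per-item times and cadence here are illustrative assumptions, not a real calibration stack's measured cost:

```python
# Calibration-overhead scaling, with deliberately naive serial assumptions.
# The per-item times and cadence are illustrative, not a real stack's cost.

PER_QUBIT_MIN = 0.5     # assumed: amplitude + DRAG + readout refit per qubit
PER_COUPLER_MIN = 1.0   # assumed: two-qubit gate tune-up per coupler
PASSES_PER_DAY = 2      # assumed cadence against hours-scale drift

def daily_cal_hours(n_qubits: int) -> float:
    n_couplers = int(1.2 * n_qubits)   # rough heavy-hex edge count
    one_pass = n_qubits * PER_QUBIT_MIN + n_couplers * PER_COUPLER_MIN
    return one_pass * PASSES_PER_DAY / 60

for n in (100, 500):
    print(f"{n} qubits: ~{daily_cal_hours(n):.1f} h/day of serial calibration")
# 100 qubits: ~5.7 h/day, an overnight job. 500 qubits: ~28.3 h/day,
# more hours than a day has, which is why calibration at that scale
# is a research problem rather than a deployment problem.
```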

Fourth, the application fit. The variational quantum eigensolver and its descendants — ADAPT-VQE, k-UpCCGSD, hardware-efficient ansätze — start producing chemistry results that are interesting to actual chemists at the 50-to-100-qubit scale, when you can encode molecules large enough to matter for catalysis questions but small enough that classical methods still give you a benchmark to argue with.
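
To make that concrete, here is a minimal VQE loop for H₂ on a classical simulator, assuming PennyLane is installed. This is a sketch of the method class, not this programme's stack: the geometry, ansatz, and optimiser settings are all illustrative choices.

```python
import pennylane as qml
from pennylane import numpy as np

# Minimal VQE for H2; assumes `pip install pennylane`. Geometry is in Bohr.
symbols = ["H", "H"]
geometry = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.398])  # ~0.74 angstrom apart

H, n_qubits = qml.qchem.molecular_hamiltonian(symbols, geometry)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def energy(theta):
    qml.BasisState(np.array([1, 1, 0, 0]), wires=range(n_qubits))  # HF state
    qml.DoubleExcitation(theta, wires=[0, 1, 2, 3])  # one-parameter ansatz
    return qml.expval(H)

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.0, requires_grad=True)
for _ in range(40):
    theta, e = opt.step_and_cost(energy, theta)

print(f"VQE energy: {e:.4f} Ha")  # approaches ~ -1.13 Ha for this geometry
```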

What we deliberately deferred

The deferrals are as important as the targets. I want to name them so nobody walks in expecting otherwise.

  • Logical qubits. Surface-code error correction wrapping a logical qubit needs roughly a thousand physical qubits per logical qubit at current code distances and physical error rates. We are not doing that on the first machine. The roadmap to logical qubits is real, but it is the second machine, not this one; the arithmetic sketch after this list makes the point.
  • Cryogenic control electronics. Room-temperature AWGs and digitisers running through coax into the fridge is the boring, working answer. Cryo-CMOS is exciting and we are watching it. We are not betting first light on it.
  • Modular interconnect. Microwave or optical links between separate cryostats to scale beyond the single-fridge limit is a 2030s problem for us. Solving it on machine one would have killed machine one.
  • Photonic and trapped-ion alternatives. Both are credible architectures with real advantages — photonics for room-temperature operation, ions for native fidelity. We chose superconducting because the SDK ecosystem, the calibration know-how, and the supply chain for dilution refrigerators are mature in a way the alternatives are not yet.
  • Cryptographic workloads. Shor's algorithm on RSA-2048 needs millions of physical qubits. We are not pretending otherwise. If your interest in quantum is breaking encryption, this machine is not for you and neither is any other machine you can buy in this decade.
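
As promised in the list, the logical-qubit deferral fits in a couple of lines:

```python
# The first deferral as arithmetic, using the ~1,000 physical qubits per
# logical qubit overhead quoted above (a rough current figure, not a spec):
PHYSICAL_PER_LOGICAL = 1_000
print(100 // PHYSICAL_PER_LOGICAL)    # 0 logical qubits on a 100-qubit device
print(1_000 // PHYSICAL_PER_LOGICAL)  # 1, and hence the second machine
```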

The chemistry case for the Ireland Quantum 100 target

The reason climate workloads sit at the front of the queue is that quantum chemistry is the one application class where the NISQ-era physics maps cleanly onto problems classical compute genuinely struggles with. Electronic structure of strongly correlated systems — transition-metal catalysts for CO₂ reduction, the active sites in nitrogenase, exotic battery cathode chemistries — scales factorially in the worst case for exact classical methods, and quantum hardware encodes the wavefunction natively rather than approximating it.

This is not speculative. VQE on small molecules has been demonstrated on every major superconducting platform. The question is not whether the algorithm works. The question is whether you can run it at a scale where the answer matters to a chemist who is not also a quantum physicist. A 100-qubit machine with honest gate fidelities and a calibration stack that keeps the device usable for hours at a time is the smallest configuration where that question gets a yes.

The connection to the IMPT carbon-removal stack is direct. Better candidate screening for direct-air-capture sorbents, better understanding of mineralisation pathways, faster iteration on photovoltaic absorber materials — these are not abstract benefits. They feed into supplier selection inside an offset stack that has to clear real verification standards. You can read more on that thread in the broader Ireland Quantum 100 programme overview.

How to read benchmark numbers honestly

If you are going to evaluate any 100-qubit machine — ours or anyone else's — there are a small number of metrics that matter and a much larger number that do not.

The ones that matter: median two-qubit gate fidelity measured by randomised benchmarking or cross-entropy benchmarking, T1 and T2 echo coherence times, single-shot readout fidelity, and circuit layer operations per second (CLOPS), which captures end-to-end throughput including classical compilation. Quantum volume is useful but increasingly decoupled from utility at scale.
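
If you want to check the first of those numbers yourself rather than take a datasheet's word for it, the randomised-benchmarking fit is a short piece of code. The survival data below is synthetic, generated from an assumed decay constant, purely so the sketch runs end to end:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic RB survival data from an assumed decay parameter.
rng = np.random.default_rng(7)
lengths = np.array([1, 5, 10, 20, 40, 80, 160])
P_TRUE = 0.985                                    # assumed decay parameter
survival = 0.5 * P_TRUE**lengths + 0.5            # ideal RB curve: A*p^m + B
survival = survival + rng.normal(0, 0.005, lengths.size)  # fake shot noise

def rb_model(m, a, p, b):
    return a * p**m + b

(a, p, b), _ = curve_fit(rb_model, lengths, survival, p0=[0.5, 0.98, 0.5])

d = 4                                 # Hilbert-space dimension for 2 qubits
avg_error = (1 - p) * (d - 1) / d     # standard RB error per Clifford
print(f"fitted decay p = {p:.4f}, average gate error = {avg_error:.2e}")
```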

The ones that do not matter on their own: raw qubit count, peak gate fidelity on a hand-picked qubit, theoretical coherence time. Anybody quoting one number from this second list without the matching number from the first list is selling, not engineering.

For the algorithm side — picking ansätze, mapping fermions to qubits with Jordan-Wigner or Bravyi-Kitaev, choosing an optimiser that survives shot noise — the tools you want to be fluent in are Qiskit, PennyLane, and, increasingly, OpenQASM 3. If you are starting from zero, the practical entry point is the access and onboarding details for the cohort programme.
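
The mapping step is a one-liner in practice. Here is a sketch comparing both mappings on a toy two-mode hopping term, assuming qiskit-nature is installed (`pip install qiskit-nature`):

```python
from qiskit_nature.second_q.mappers import BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.second_q.operators import FermionicOp

# A toy hopping term between two spin orbitals, not a molecular Hamiltonian.
hop = FermionicOp({"+_0 -_1": 1.0, "+_1 -_0": 1.0}, num_spin_orbitals=4)

for mapper in (JordanWignerMapper(), BravyiKitaevMapper()):
    qubit_op = mapper.map(hop)
    print(type(mapper).__name__, "->", qubit_op.num_qubits, "qubits")
    print(qubit_op)
```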

Where to start this week

If you are a researcher, a CTO, or a product lead trying to work out whether quantum belongs anywhere on your roadmap in the next three years, do this: pick one problem in your domain that you genuinely cannot solve well classically, write down the molecular or combinatorial structure of it on a single page, and bring that page to a conversation. Do not start with the hardware. Do not start with the SDK. Start with the problem, because the only honest test of whether a 100-qubit machine helps you is whether your problem fits inside the envelope I described above. If it does, the next twelve months are interesting. If it does not, I would rather tell you that now than in 2027.
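
If you want that test in executable form, here it is as a crude screen, with the budgets from earlier in this brief baked in as illustrative thresholds rather than a formal acceptance test:

```python
def fits_nisq_envelope(n_qubits: int, n_2q_layers: int) -> bool:
    """Crude screen: does a problem instance fit a ~100-qubit NISQ device?"""
    QUBIT_BUDGET = 100    # physical qubits, no error correction
    LAYER_BUDGET = 300    # sequential two-qubit layers, coherence-limited
    return n_qubits <= QUBIT_BUDGET and n_2q_layers <= LAYER_BUDGET

# e.g. a ~40-spin-orbital active space with a shallow adaptive ansatz:
print(fits_nisq_envelope(40, 120))    # True: worth a conversation
print(fits_nisq_envelope(80, 5000))   # False: too deep for NISQ hardware
```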

Research collaboration or early access

Book a research call →