Ireland Quantum 100 · Grid Optimisation

Grid optimisation — when quantum beats classical for renewable dispatch


Why grid dispatch is the right early target for a 100-qubit machine

Renewable dispatch is, at its core, a constrained combinatorial problem with a continuous overlay. You have n generators (wind farms, solar arrays, hydro, batteries, demand-response contracts), m transmission corridors with thermal and stability limits, and a horizon of 5-minute settlement intervals extending 24-48 hours ahead. The decision variables are mixed: binary commitment (is this unit on?), integer (transformer tap positions, HVDC set-points), and continuous (MW dispatch, reactive power). Add stochastic forecasts for wind and irradiance and the full problem — security-constrained unit commitment with stochastic optimal power flow, SCUC-SOPF — is NP-hard.
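
To make that variable mix concrete, here is a minimal Pyomo sketch of the deterministic core (Pyomo is the modelling layer the stack section below targets; the unit names, capacities, costs, and flat demand are illustrative, and network limits are omitted):

```python
# Minimal sketch of the mixed-variable structure described above.
# All data (units, capacities, costs, demand) is illustrative.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.G = pyo.Set(initialize=["wind_1", "hydro_1", "battery_1"])  # generators
m.T = pyo.RangeSet(0, 287)                                    # 5-min periods over 24 h

# Mixed decision variables: binary commitment plus continuous dispatch
m.on = pyo.Var(m.G, m.T, domain=pyo.Binary)                   # is unit g on in period t?
m.p = pyo.Var(m.G, m.T, domain=pyo.NonNegativeReals)          # MW dispatch

pmax = {"wind_1": 120.0, "hydro_1": 90.0, "battery_1": 30.0}  # MW, illustrative
demand = {t: 150.0 for t in m.T}                              # flat demand, illustrative

# Dispatch only when committed
m.cap = pyo.Constraint(m.G, m.T, rule=lambda m, g, t: m.p[g, t] <= pmax[g] * m.on[g, t])
# Energy balance per period (losses and network limits omitted)
m.balance = pyo.Constraint(m.T, rule=lambda m, t: sum(m.p[g, t] for g in m.G) == demand[t])

# Illustrative cost: fixed commitment cost plus linear energy cost
m.obj = pyo.Objective(
    expr=sum(5.0 * m.on[g, t] + 2.0 * m.p[g, t] for g in m.G for t in m.T),
    sense=pyo.minimize,
)
```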

Classical solvers have been remarkably effective. MILP via Gurobi, CPLEX or HiGHS handles unit commitment for system operators today by aggressively exploiting structure: branch-and-cut, column generation, Benders decomposition. These work because grid problems have block-angular structure that lets you split the master schedule from the per-period dispatch. Where they struggle is the stochastic extension — when you want to co-optimise across thousands of scenarios of wind output — and the AC power-flow non-convexities. Operators today linearise (DC-OPF), then patch the result with a separate AC feasibility step. That patch is where money and CO₂ leak.
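
For reference, the DC approximation those last two sentences describe is just a linear solve. A minimal numpy sketch on an illustrative 3-bus network, under the standard B-theta assumptions (flat voltage magnitudes, small angles, lossless lines):

```python
# DC power flow (B-theta): the linearisation that DC-OPF relies on.
# 3-bus example; reactances and injections are illustrative.
import numpy as np

# Line list: (from_bus, to_bus, reactance x in p.u.)
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n = 3
P = np.array([1.0, -0.4, -0.6])  # net injections in p.u., summing to zero

# Build the susceptance matrix B (per-line susceptance b = 1/x)
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix bus 0 as the angle reference and solve the reduced linear system
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Linearised line flows: f_ij = (theta_i - theta_j) / x_ij
flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
print(flows)  # AC feasibility (voltages, losses) must be checked separately
```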

Where quantum optimisation actually has a shot

I want to be careful here. Most of the published "quantum grid optimisation" literature uses tiny IEEE 14-bus or 30-bus test systems and reports advantage that disappears the moment you scale. That is not the claim. The claim is narrower: there is a class of sub-problems inside grid dispatch where a 100-qubit superconducting machine, used as a co-processor inside a classical decomposition framework, plausibly outperforms the inner-loop classical heuristic by Q2 2027.

Three candidates:

  • QUBO-formulated unit commitment for islanded sub-networks. Map the binary commitment variables to a QUBO and run QAOA; a worked sketch of the encoding follows this list. For a sub-network of 60-90 thermal/storage units with rich coupling, this fits a heavy-hex 100-qubit chip if you accept depth-3 or depth-4 QAOA layers and use SWAP-aware compilation.
  • Scenario reduction for stochastic dispatch. Pick k representative wind scenarios from a population of 10,000. This is a quadratic assignment problem with a known QUBO encoding, and it is exactly the kind of small, hot inner kernel where a quantum sampler lives well alongside a classical L-shaped method.
  • AC-OPF warm-starting via variational eigensolvers. Use a VQE-style ansatz to find a good interior point for the non-convex AC-OPF, then hand to IPOPT. The quantum job here is not solving the OPF; it is producing a starting iterate that classical interior-point methods cannot easily reach.
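
As a concrete instance of the first candidate, here is a minimal sketch of the penalty-method QUBO construction for a toy four-unit commitment problem. The data and the penalty weight are illustrative; on hardware, the matrix Q becomes the diagonal cost Hamiltonian of the QAOA circuit, and the exhaustive check at the end stands in for classical verification of sampled bitstrings:

```python
# Penalty-method QUBO for a toy 4-unit commitment sub-problem:
# minimise commitment cost subject to covering 100 MW of demand.
# x_i in {0,1} means "unit i committed"; all numbers are illustrative.
import numpy as np
from itertools import product

cap = np.array([40.0, 30.0, 50.0, 20.0])   # MW if committed
cost = np.array([5.0, 4.0, 7.0, 2.0])      # commitment cost
demand, lam = 100.0, 1.0                   # penalty weight lam is tuned in practice

# QUBO energy: cost.x + lam * (cap.x - demand)^2, expanded into an
# upper-triangular matrix Q plus a constant (using x_i^2 = x_i).
n = len(cap)
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = cost[i] + lam * (cap[i] ** 2 - 2 * demand * cap[i])
    for j in range(i + 1, n):
        Q[i, j] = 2 * lam * cap[i] * cap[j]

def energy(x):
    x = np.asarray(x)
    return float(x @ Q @ x + lam * demand ** 2)

# Exhaustive check stands in for hardware sampling + classical verification.
best = min(product([0, 1], repeat=n), key=energy)
print(best, energy(best))   # (0, 1, 1, 1): meets demand exactly at cost 13
```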

What 100 transmons actually buy you

The Ireland Quantum 100 is a fixed-frequency transmon array on a heavy-hex topology, operated at sub-15 mK in a dilution refrigerator. Heavy-hex matters for grid problems because it gives you degree-3 connectivity, which is well-matched to the sparsity of distribution-network admittance matrices but a poor match to dense QUBOs. That has direct consequences for compilation.
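
The degree-3 claim is easy to check with Qiskit's built-in heavy-hex coupling-map helper (distance 5 here is just a convenient size, not the Ireland Quantum 100 layout):

```python
# Degree distribution of a heavy-hex lattice, via Qiskit's transpiler
# helper. Distance 5 (57 qubits) is illustrative, not the IQ100 chip.
from collections import Counter
from qiskit.transpiler import CouplingMap

cmap = CouplingMap.from_heavy_hex(5, bidirectional=False)

degree = Counter()
for a, b in cmap.get_edges():
    degree[a] += 1
    degree[b] += 1

print(cmap.size(), "qubits")                       # 57
print(sorted(Counter(degree.values()).items()))    # no qubit above degree 3
```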

For a 90-variable unit-commitment QUBO with average density 0.4, naive embedding onto heavy-hex requires roughly 3-4× qubit overhead from chain construction — which is why we are not running 90-variable problems on a 100-qubit machine. Realistic problem sizes for first-light customer workloads in Q2 2027 are 30-50 logical binary variables after pre-processing, embedded with ancilla budget for SWAP networks and parity checks. That is still operationally interesting: it covers most Irish distribution-substation reconfiguration problems and a meaningful chunk of EirGrid-scale unit-commitment sub-blocks under decomposition.
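
A back-of-envelope version of that overhead argument, using the numbers from this paragraph:

```python
# Why a dense 90-variable QUBO overwhelms 100 heavy-hex qubits:
# couplings required by the problem vs. native edges on the chip.
n_vars, density = 90, 0.4
quad_terms = density * n_vars * (n_vars - 1) / 2   # ~1,602 pairwise terms

# Degree <= 3 means at most 3/2 edges per qubit: ~150 native couplers.
native_edges = 100 * 3 / 2
print(f"{quad_terms:.0f} couplings needed vs {native_edges:.0f} native")
# The deficit is bridged with chains and SWAP layers, which is where the
# 3-4x qubit overhead (and the 30-50 variable practical limit) comes from.
```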

Two-qubit gate fidelity is the binding constraint. At depth-4 QAOA on 50 variables you are looking at roughly 600-1,200 two-qubit gates after transpilation. With 99.5% gate fidelity that is a circuit survival probability of roughly 5% at the low end (0.995^600 ≈ 0.05) and well under 1% at the high end: usable for sampling, not for exact ground-state recovery. The honest framing is: quantum-assisted heuristic, not quantum-exact solver. The output distribution is biased toward low-energy solutions; you take the best of N samples and verify classically.
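
The arithmetic behind those figures, plus the best-of-N logic that makes a low survival probability workable. The per-undamaged-shot hit rate q is an illustrative assumption, not a measured figure:

```python
# Survival probability and best-of-N sampling arithmetic for the
# depth-4, 50-variable case discussed above.
fid, gates_lo, gates_hi = 0.995, 600, 1200
p_lo, p_hi = fid ** gates_lo, fid ** gates_hi
print(f"survival: {p_lo:.3f} (600 gates), {p_hi:.4f} (1,200 gates)")

# If a fraction q of undamaged shots lands in the low-energy tail we
# want, N shots give 1 - (1 - p*q)^N odds of at least one usable hit.
# q = 0.2 is an illustrative assumption.
q, N = 0.2, 10_000
for p in (p_lo, p_hi):
    hit = 1 - (1 - p * q) ** N
    print(f"P(at least one good sample in {N:,} shots) = {hit:.3f}")
```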

The software stack for grid workloads

We are building toward an OpenQASM 3 front-end with Qiskit and PennyLane bindings, plus a Cirq compatibility path. The grid-specific layer sits above this:

  • A Pyomo / JuMP bridge so that operators expressing problems in the language they already use (MILP, GAMS-style) can offload the QUBO sub-problem without rewriting their entire model; a sketch of the operator-side view follows this list.
  • An automatic problem-decomposition pass that identifies which sub-blocks are quantum-amenable based on density, variable count, and constraint structure. Most of the model stays classical.
  • Classical fallbacks via simulated annealing and tensor-network contraction so that every workload runs even when the cryostat is in a maintenance window. Reproducibility matters more than novelty.
  • A surface-code roadmap for the back half of the decade. At 100 physical qubits we are pre-error-correction; the architecture decisions today (heavy-hex, fixed-frequency couplers, transverse-field readout) are made to keep the surface-code path open without rebuilding the chip.
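
A sketch of what the Pyomo bridge could look like from the operator's side. To be explicit: iq100_bridge, offload_qubo, and the fallback keyword are hypothetical names invented for this illustration, not a shipped API.

```python
# Hypothetical operator-side view of the bridge: the model stays plain
# Pyomo; one binary sub-block is tagged for QUBO offload.
import pyomo.environ as pyo
# from iq100_bridge import offload_qubo   # hypothetical import

model = pyo.ConcreteModel()
model.x = pyo.Var(range(40), domain=pyo.Binary)             # quantum-amenable block
model.y = pyo.Var(range(500), domain=pyo.NonNegativeReals)  # stays classical
# ... constraints and objective as in any Pyomo UC model ...

# The decomposition pass would extract the binary block, emit a QUBO,
# sample it (on hardware, or the simulated-annealing fallback during a
# maintenance window), and fix the result back into the classical model:
#
#   best_x = offload_qubo(model, variables=model.x, fallback="sa")
#   for i, value in best_x.items():
#       model.x[i].fix(value)
#   pyo.SolverFactory("appsi_highs").solve(model)   # classical clean-up
```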

Honest expectations for Q2 2027 customer access

When customer access opens, grid-optimisation workloads will be one of the priority cohorts alongside carbon-capture chemistry and battery materials. What a customer should expect at first-light multi-qubit operation:

  • Problem sizes in the 30-50 binary-variable range for QAOA, scaling toward 70-80 as gate fidelity matures through 2027.
  • Wall-clock times dominated by classical pre- and post-processing; the quantum portion is seconds, the embedding and verification minutes.
  • Benchmarks reported against Gurobi, simulated annealing, and tensor-network baselines on the same problem instance. No advantage claim without all three.
  • Sovereign data residency: workloads compile, run, and store results in Co. Tipperary.

Research collaboration or early access

Direct with Michael. No charge for the call.

Book a research call →