The problem with classical transition-risk models
A transition-risk model has to do something awkward: simulate the joint evolution of carbon prices, technology adoption curves, energy demand, and counterparty default probabilities, across decades, under policy regimes that don't yet exist. The standard tool is Monte Carlo — sample a scenario, run a balance-sheet projection, repeat a few hundred thousand times, take the tail. NGFS scenarios, ECB stress tests, and most internal climate VaR pipelines all reduce to this shape.
The trouble is dimensionality. A meaningful transition-risk path has to track at least: a stochastic carbon price, a stochastic abatement-cost curve per sector, electricity-mix transitions, fossil-asset stranding rates, and correlated macro factors. Each path is cheap; convergence is not. Tail estimates at the 99.5% level for a portfolio of even a few thousand obligors typically need 10⁶–10⁷ paths to stabilise, and the standard error of the estimator scales as 1/√N. Halving the confidence interval costs four times the compute. By the time you've added pathwise sensitivities for hedging, you are looking at multi-day runs on large CPU clusters for one stress test.
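The scaling is worth making concrete. A short sketch with illustrative numbers, treating each path as a Bernoulli draw on the tail event:

```python
import math

def mc_standard_error(p_tail: float, n_paths: int) -> float:
    # Each path is a Bernoulli(p) draw on the tail event,
    # so the estimator's standard error is sqrt(p * (1 - p) / N).
    return math.sqrt(p_tail * (1.0 - p_tail) / n_paths)

p = 0.005                                  # a 99.5% tail event
rel_err = mc_standard_error(p, 2 * 10**6) / p
print(rel_err)                             # ~0.01: ~2e6 paths for 1% relative error

# Quadrupling the path count only halves the confidence interval.
ratio = mc_standard_error(p, 10**6) / mc_standard_error(p, 4 * 10**6)
print(ratio)                               # ~2.0
```

Holding the tail probability fixed, the only lever left is N, which is why the path counts above climb so fast.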
Where quantum Monte Carlo changes the arithmetic
Quantum amplitude estimation gives a quadratic speedup over classical Monte Carlo: the estimation error scales as 1/N rather than 1/√N, where N is the number of oracle calls. For a tail-risk estimate that classically demands 10⁸ samples, the quantum-equivalent oracle budget is closer to 10⁴. That is the headline. The engineering reality is more complicated.
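Up to constant factors, and ignoring any error-correction or transpilation overhead, the budgets compare as a back-of-envelope sketch:

```python
# Sample budgets for a target estimation error eps, up to constant factors
# (and ignoring any error-correction or transpilation overhead).
def classical_samples(eps: float) -> float:
    return 1.0 / eps ** 2      # Monte Carlo: error ~ 1/sqrt(N)

def quantum_oracle_calls(eps: float) -> float:
    return 1.0 / eps           # amplitude estimation: error ~ 1/N

eps = 1e-4
print(classical_samples(eps), quantum_oracle_calls(eps))  # ~1e8 vs ~1e4
```

The constants hidden in those tildes are exactly what the rest of this post is about.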
The standard quantum Monte Carlo recipe for finance — building on the work of Rebentrost, Stamatopoulos, Woerner and others — is roughly: load a probability distribution into a register using a quantum generative model or a Grover-Rudolph-style preparation, encode the payoff function as a controlled rotation onto an ancilla, and apply iterative quantum amplitude estimation to extract the expectation value. For transition risk the "payoff" is a non-linear function of the path: a discounted loss given default, gated by whether a transition variable crosses some threshold.
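The statistic that iterative amplitude estimation measures can be simulated classically for intuition. After k Grover iterations the probability of measuring the marked ancilla state is sin²((2k+1)θ), where a = sin²(θ) is the quantity being estimated; a noiseless sketch with an assumed tail probability:

```python
import math

# Quantity being estimated: a tail probability encoded as a = sin^2(theta).
a_true = 0.003                      # assumed exceedance probability
theta = math.asin(math.sqrt(a_true))

def marked_probability(k: int) -> float:
    # Probability of measuring the 'good' ancilla after k Grover iterations:
    # each iteration rotates the state by 2 * theta in the relevant 2D subspace.
    return math.sin((2 * k + 1) * theta) ** 2

# k = 0 recovers plain sampling; larger k amplifies sensitivity to theta,
# which is where the 1/N error scaling comes from.
for k in (0, 1, 4, 16):
    print(k, marked_probability(k))
```

Iterative schemes choose a schedule of k values and invert these measured probabilities to pin down θ, and hence a, without ever running an inverse QFT.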
The hard part is state preparation. Loading a 20-dimensional correlated distribution into a quantum register efficiently is itself a research problem. Approaches we are tracking and intend to support on the Ireland Quantum 100 stack include qGAN-based loaders, matrix-product-state preparation for low-bond-dimension distributions, and signal-processing methods (QSVT) for analytic distributions like multivariate log-normals.
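For intuition on Grover-Rudolph-style preparation: each layer of controlled rotations splits a region's probability mass between its left and right halves. A sketch computing the rotation angles for a toy 8-point discretised log-normal (the distribution parameters are assumptions):

```python
import math

# Discretised log-normal over 8 grid points (toy distribution, assumed params).
mu, sigma = 0.0, 0.5
xs = [0.25 * (i + 0.5) for i in range(8)]
pdf = [math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
       / (x * sigma * math.sqrt(2 * math.pi)) for x in xs]
probs = [p / sum(pdf) for p in pdf]

def split_angle(region):
    # RY angle theta with cos^2(theta / 2) equal to the region's left-half mass.
    left = sum(region[: len(region) // 2]) / sum(region)
    return 2.0 * math.acos(math.sqrt(left))

angles, regions = [], [probs]
while len(regions[0]) > 1:
    angles.append([split_angle(r) for r in regions])   # one layer per qubit
    regions = [half for r in regions
               for half in (r[: len(r) // 2], r[len(r) // 2 :])]
print([len(layer) for layer in angles])  # → [1, 2, 4]
```

The catch is visible even here: the number of controlled rotations doubles per qubit, which is why this route only stays efficient for distributions with exploitable structure.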
Mapping a transition-risk workload onto a 100-qubit transmon device
Ireland Quantum 100 is a superconducting transmon machine on a heavy-hex topology, operating below 15 mK in a dilution refrigerator, with the SDK surface targeting OpenQASM 3, Qiskit, PennyLane and Cirq. The relevant constraints for a climate-finance workload are:
- Connectivity. Heavy-hex is sparse. Two-qubit gates are nearest-neighbour, so any algorithm that needs long-range entanglement — and canonical amplitude estimation is one, because of its inverse-QFT readout — pays a SWAP overhead. For a 20-qubit problem register plus ancillae, transpiled circuits typically inflate by a factor of 3–5× in two-qubit gate count.
- Coherence budget. Modern transmons run T1 and T2 in the 100–300 µs range; two-qubit gates are 200–500 ns. That gives a usable circuit depth in the low thousands of two-qubit layers before decoherence dominates. Iterative amplitude estimation is friendlier here than canonical AE because it avoids the QFT and uses shorter, repeated Grover-like circuits.
- Readout. Amplitude estimation extracts a single scalar per shot batch. Readout fidelity matters more than for variational algorithms — a 1% readout error directly biases the tail estimate.
- Error mitigation. Pre-FTQC, we lean on zero-noise extrapolation, probabilistic error cancellation, and Pauli twirling. The surface-code roadmap is real but lives behind logical-qubit overheads of ~1000:1; for the 100-physical-qubit generation, mitigated NISQ-style execution is the operating point.
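Zero-noise extrapolation, the first of those mitigation techniques, is easy to illustrate: measure the same observable at deliberately amplified noise levels and extrapolate back to zero noise. A toy sketch with a fabricated exponential noise model (the numbers are assumptions, not hardware data):

```python
import math

# Toy noise model: the measured expectation decays exponentially with the
# noise scale factor (fabricated numbers, not hardware data).
IDEAL = 0.84

def noisy_expectation(scale: float, decay: float = 0.12) -> float:
    return IDEAL * math.exp(-decay * scale)

scales = [1.0, 2.0]                       # 1x native noise, 2x stretched noise
values = [noisy_expectation(s) for s in scales]

# Linear Richardson extrapolation back to scale = 0.
zne = 2.0 * values[0] - values[1]
print(values[0], zne)                     # the extrapolated value is closer to 0.84
```

On hardware, the noise stretching is done by pulse stretching or gate folding rather than by dialing a model parameter, but the extrapolation step is exactly this.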
A concrete circuit sketch for climate stress testing
Consider a simplified NGFS "Disorderly" scenario applied to a portfolio of high-emission obligors. The variables are: carbon price C_t (mean-reverting jump diffusion), sector emissions intensity E_s,t (declining with abatement investment), and counterparty EBITDA shock ΔY_i,t (correlated Gaussian). Loss is triggered when projected debt-service coverage falls below one.
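A minimal classical simulation of the carbon-price leg, as a mean-reverting jump diffusion with illustrative, uncalibrated parameters:

```python
import math
import random

# One toy path of a mean-reverting jump-diffusion carbon price
# (illustrative parameters, not calibrated to any NGFS scenario).
random.seed(7)
kappa, c_bar, sigma = 0.3, 150.0, 12.0    # reversion speed, long-run price, vol
jump_prob, jump_size = 0.05, 40.0         # annual policy-jump intensity and size
dt, years = 1.0, 30
c = 80.0
path = [c]
for _ in range(years):
    drift = kappa * (c_bar - c) * dt
    diffusion = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    jump = jump_size if random.random() < jump_prob * dt else 0.0
    c = max(c + drift + diffusion + jump, 0.0)
    path.append(c)
print(len(path), path[-1])
```

It is this path distribution, jointly with the emissions-intensity and EBITDA variables, that the state-preparation block has to load.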
The quantum circuit decomposes into four blocks. First, a state-preparation block U_dist that loads the joint distribution of (C, E, ΔY) at a discretised set of time points — typically 8–16 qubits per variable, depending on the precision needed in the tail. Second, an arithmetic block that computes the obligor-level loss using quantum adders and comparators, written against a fixed-point representation. Third, a comparator that flips an ancilla if the portfolio loss exceeds a threshold L*. Fourth, the iterative amplitude estimation routine that estimates the marked-state probability — which is exactly the exceedance probability we want.
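The arithmetic and comparator blocks have an exact classical reference, which is useful for validating circuit outputs. A sketch in the 8-bit fixed-point representation mentioned above, with made-up obligor losses:

```python
# Classical reference for the arithmetic + comparator blocks: compute the
# portfolio loss in the same fixed-point representation the circuit would use,
# and flip a (classical) flag where the circuit would flip the ancilla.
FRAC_BITS = 8                        # 8-bit fixed point, matching the text

def to_fixed(x: float) -> int:
    return int(round(x * (1 << FRAC_BITS)))

# Per-obligor losses in currency units (toy values, assumed).
losses = [0.12, 0.00, 0.45, 0.07, 0.31]
threshold = 0.75                     # the L* of the comparator block

total_fixed = sum(to_fixed(l) for l in losses)   # quantum adders do this
exceeds = total_fixed > to_fixed(threshold)      # comparator flips the ancilla
print(exceeds)  # → True
```

Discretisation error from the fixed-point encoding feeds directly into the tail estimate, which is why the bit width per variable is a precision knob and not a free choice.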
For a 5-obligor toy problem at 8-bit precision, this is already a 40–60 qubit circuit before ancillae, and gate counts run into the tens of thousands once transpiled to heavy-hex. It is tight on a 100-qubit device but tractable for benchmarking. The realistic near-term play is hybrid: classical Monte Carlo for the bulk distribution, quantum amplitude estimation for the rare-event tail conditional on the bulk.
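The hybrid split factors the exceedance probability as P(loss > L*) = P(stressed region) × P(exceed | stressed region): classical Monte Carlo handles the first factor, amplitude estimation the second. A sketch with a placeholder standing in for the quantum estimate (the region boundary and the conditional probability are assumptions):

```python
import random

random.seed(1)

# Bulk/tail split on a crude one-factor systemic shock, "stressed" when z > 2
# (the boundary and the conditional tail probability below are assumptions).
def in_stressed_region(z: float) -> bool:
    return z > 2.0

n = 200_000
hits = sum(in_stressed_region(random.gauss(0.0, 1.0)) for _ in range(n))
p_region = hits / n                  # cheap classical Monte Carlo on the bulk

p_exceed_given_region = 0.18         # placeholder for the quantum tail estimate
p_tail = p_region * p_exceed_given_region
print(p_region, p_tail)
```

The quantum circuit then only has to load the conditional distribution inside the stressed region, which is a far smaller state-preparation problem than loading the full joint law.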
What this enables that classical can't
Three things, specifically. First, faster convergence on tail estimates: a bank can run climate stress tests overnight rather than over a weekend, which changes the cadence at which transition-risk numbers feed into actual capital allocation. Second, higher-dimensional scenario coverage: current models often collapse correlation structure to keep classical Monte Carlo tractable, and quantum sampling has no such pressure. Third, joint physical-and-transition risk: modelling the two together multiplies the dimensionality, which is exactly the cost that amplitude estimation attacks.
Research collaboration or early access
Direct with Michael. No charge for the call.
Book a research call →