Ireland Quantum 100 · Technical brief

Quantum Monte Carlo for climate finance — the speed-up frontier


Climate finance is a Monte Carlo problem wearing a trench coat. Strip away the taxonomies and the disclosure frameworks and what you have is a very large stochastic integral: thousands of obligors, decades of horizon, dozens of physical and transition pathways, and a portfolio P&L that depends non-linearly on all of them. Classical Monte Carlo handles this — badly, slowly, and at energy cost that is itself becoming a climate problem. Quantum amplitude estimation offers a quadratic speed-up on the sampling cost, and that is the part of the story worth taking seriously. Everything else is marketing.

Why climate finance is a sampling problem first

A transition-risk model on a credit portfolio typically chains four stochastic layers. First, an integrated assessment model or a downscaled climate scenario produces correlated paths for carbon price, energy mix, and physical hazard intensity. Second, a sectoral pass-through translates those paths into revenue, cost, and capex shocks at obligor level. Third, an obligor-level financial model maps shocks into probability of default and loss given default. Fourth, the portfolio aggregator integrates over correlated defaults to produce expected loss, value at risk, and expected shortfall.
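
A stylised version of that four-layer chain fits in a few lines of NumPy, and it is worth having as the classical baseline before any quantum comparison. The elasticities, the Merton-style default barrier, and the exposures below are illustrative placeholders, not calibrated values.

    import numpy as np

    rng = np.random.default_rng(7)

    def classical_es(n_paths=100_000, n_obligors=10, alpha=0.995):
        # Layer 1: a single-factor carbon-price shock stands in for the scenario engine.
        carbon_shock = rng.lognormal(mean=0.0, sigma=0.4, size=n_paths)
        # Layer 2: sector pass-through -- one elasticity per obligor (placeholder betas).
        elasticity = rng.uniform(0.2, 1.5, size=n_obligors)
        revenue_shock = np.outer(carbon_shock - 1.0, elasticity)
        # Layer 3: Merton-style default -- asset value pushed below a barrier of zero.
        asset_value = 1.0 - revenue_shock + 0.3 * rng.standard_normal((n_paths, n_obligors))
        defaulted = asset_value < 0.0
        # Layer 4: portfolio aggregation to loss, VaR and expected shortfall (LGD = 1).
        exposure = rng.uniform(1.0, 10.0, size=n_obligors)
        loss = defaulted.astype(float) @ exposure
        var = np.quantile(loss, alpha)
        return var, loss[loss >= var].mean()

    print(classical_es())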

Each layer is differentiable in places and discontinuous in others. The defaults are rare events. The tails are where the regulator and the CFO actually live. Classical Monte Carlo converges as O(1/√N), which means halving the confidence interval costs four times the samples. For a tail estimate at the 99.5th percentile across a forty-year horizon with correlated obligors, the sample budget gets uncomfortable quickly, and the runtime is not the only constraint — the energy footprint of running these scenarios overnight, every night, across a banking book is non-trivial.
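
The arithmetic behind "uncomfortable quickly" is easy to reproduce. For a tail probability p estimated from N paths, the standard error is sqrt(p(1-p)/N), so the path count for a target relative error grows as 1/(p·ε²); the figures below assume a 0.5% exceedance probability and are indicative only.

    import math

    def paths_needed(p, rel_err):
        # Relative error of a Monte Carlo tail estimate is sqrt((1 - p) / (p * N));
        # solve for N at the target.
        return math.ceil((1 - p) / (p * rel_err ** 2))

    p_tail = 0.005   # 99.5th percentile exceedance
    for rel_err in (0.10, 0.05, 0.01):
        print(f"relative error {rel_err:.0%}: ~{paths_needed(p_tail, rel_err):,} paths")
    # Halving the relative error quadruples the path count -- the O(1/sqrt(N)) tax.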

This is the regime where quantum amplitude estimation is interesting on paper. The convergence is O(1/N) in the number of oracle calls. That single exponent change is the entire commercial argument.

What quantum amplitude estimation actually does

Amplitude estimation, in its canonical form, takes an operator A that prepares a state where the amplitude of a "good" subspace encodes the quantity you want to estimate — say, the probability that portfolio loss exceeds a threshold. You then apply a Grover-like operator Q repeatedly and use phase estimation to read out the amplitude. The number of applications of A needed to reach a target precision ε scales as O(1/ε) rather than the classical O(1/ε²).
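
The exponent change can be sanity-checked without hardware, because the measurement statistics of the amplified circuits are known in closed form: after k applications of Q, the marked state is observed with probability sin²((2k+1)θ), where the target amplitude is a = sin²θ. The toy below simulates those outcomes and fits a by maximum likelihood; it is a sketch in the spirit of maximum-likelihood amplitude estimation, not a faithful reproduction of any one published variant.

    import numpy as np

    rng = np.random.default_rng(42)

    def mlae_estimate(a_true, schedule, shots=100):
        # Simulate Grover-amplified measurement outcomes and fit the amplitude by
        # maximising the joint binomial likelihood over a grid of candidate angles.
        theta_true = np.arcsin(np.sqrt(a_true))
        hits = {k: rng.binomial(shots, np.sin((2 * k + 1) * theta_true) ** 2) for k in schedule}
        thetas = np.linspace(1e-4, np.pi / 2 - 1e-4, 20_000)
        log_l = np.zeros_like(thetas)
        for k, h in hits.items():
            p = np.clip(np.sin((2 * k + 1) * thetas) ** 2, 1e-12, 1 - 1e-12)
            log_l += h * np.log(p) + (shots - h) * np.log(1 - p)
        return np.sin(thetas[np.argmax(log_l)]) ** 2

    a_true = 0.0132                      # e.g. a tail probability encoded as an amplitude
    schedule = [0, 1, 2, 4, 8, 16, 32]   # oracle calls grow linearly; error shrinks roughly linearly
    print(f"true {a_true:.4f}  estimated {mlae_estimate(a_true, schedule):.4f}")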

For climate-finance use, the practical recipe looks like this (a minimal Qiskit sketch follows the list):

  • Encode the joint distribution of risk factors — carbon price paths, hazard indices, sectoral betas — into a quantum state via a parametrised loader. Quantum generative adversarial network loaders and Grover-Rudolph trees are both candidates, with very different cost profiles.
  • Build an arithmetic circuit that, controlled on the loaded state, computes portfolio loss into an ancilla register.
  • Mark the "tail" subspace via a comparator against a loss threshold.
  • Run iterative or maximum-likelihood amplitude estimation to read out the tail probability or, with conditional expectation tricks, the expected shortfall.
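
A minimal end-to-end sketch of that recipe, using Qiskit circuit-library components and the reference simulator-backed primitives. The single-qubit RY "loader", the three-qubit loss register, and the threshold are toy stand-ins for a real distribution loader and loss arithmetic, and the imports assume a qiskit / qiskit-algorithms combination in which the reference Sampler primitive and IterativeAmplitudeEstimation are still available.

    from qiskit import QuantumCircuit
    from qiskit.circuit.library import IntegerComparator
    from qiskit.primitives import Sampler
    from qiskit_algorithms import EstimationProblem, IterativeAmplitudeEstimation

    n_loss = 3        # toy 3-qubit loss register (loss values 0..7)
    threshold = 5     # mark the tail: loss >= 5

    # Step 1: toy "loader" -- independent RY rotations stand in for a Grover-Rudolph
    # or qGAN loader over correlated climate risk factors.
    loader = QuantumCircuit(n_loss)
    for q, angle in enumerate([0.40, 0.70, 1.10]):
        loader.ry(angle, q)

    # Steps 2-3: comparator flags the tail subspace into one objective qubit,
    # using n_loss - 1 work ancillas that it uncomputes itself.
    comparator = IntegerComparator(num_state_qubits=n_loss, value=threshold, geq=True)
    state_prep = QuantumCircuit(comparator.num_qubits)
    state_prep.compose(loader, qubits=list(range(n_loss)), inplace=True)
    state_prep.compose(comparator, inplace=True)

    # Step 4: iterative amplitude estimation reads the tail probability off the
    # objective qubit with O(1/epsilon) oracle calls.
    problem = EstimationProblem(state_preparation=state_prep, objective_qubits=[n_loss])
    iae = IterativeAmplitudeEstimation(epsilon_target=0.01, alpha=0.05, sampler=Sampler())
    print("estimated tail probability:", iae.estimate(problem).estimation)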

The honest part of this picture: the speed-up is real in the asymptotic limit. The unflattering part: the constant factors, the loader cost, and the depth of the arithmetic circuit are where the engineering is hard. On a 100-qubit superconducting transmon device with realistic two-qubit gate fidelities, you do not get the textbook scaling. You get something closer to it as you push gate fidelity up, error-mitigation schemes mature, and — eventually — surface-code logical qubits arrive.

Mapping the workload to a heavy-hex transmon device

Superconducting transmons on a heavy-hex lattice impose specific constraints that change how you write the circuit, not whether you can run it at all. Three matter for amplitude-estimation workloads in particular.

Connectivity tax. Heavy-hex connectivity is sparse: each qubit couples to at most three neighbours, and most couple to only two. The arithmetic part of the loss circuit — adders, comparators, multipliers — is dense in two-qubit gates. SWAP overhead is the silent cost. A naïve transpilation of a 30-qubit ripple-carry adder onto heavy-hex can double the two-qubit gate count. Layout-aware synthesis using something like Qiskit's SabreLayout with a workload-specific cost function is not optional; it is the difference between a circuit that fits in coherence and one that doesn't.
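
The tax can be measured in simulation before touching hardware. The sketch below transpiles a ripple-carry adder first with unconstrained connectivity and then onto a 19-qubit distance-3 heavy-hex coupling map, and compares two-qubit gate counts; the adder width, seed, and optimisation level are arbitrary choices, and the absolute numbers will vary with the Qiskit version.

    from qiskit import transpile
    from qiskit.circuit.library import CDKMRippleCarryAdder
    from qiskit.transpiler import CouplingMap

    basis = ["rz", "sx", "x", "cx"]
    adder = CDKMRippleCarryAdder(num_state_qubits=5)   # 12 qubits incl. carry-in/out

    # Reference: all-to-all connectivity, no routing needed.
    flat = transpile(adder, basis_gates=basis, optimization_level=3, seed_transpiler=11)

    # Constrained: a distance-3 heavy-hex coupling map forces SWAP insertion.
    heavy_hex = CouplingMap.from_heavy_hex(3)
    routed = transpile(adder, basis_gates=basis, coupling_map=heavy_hex,
                       optimization_level=3, seed_transpiler=11)

    print("two-qubit gates, all-to-all:", flat.num_nonlocal_gates())
    print("two-qubit gates, heavy-hex :", routed.num_nonlocal_gates())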

Coherence budget. At sub-15 mK, modern transmons reach T1 and T2 in the hundreds of microseconds. Two-qubit gates are tens to low-hundreds of nanoseconds. That gives you a depth budget — but the budget is for the whole computation, including state preparation and amplitude amplification rounds. Iterative amplitude estimation is the friendly variant here because it trades a longer classical loop for shorter quantum circuits, which suits noisy devices.
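
The depth budget is one line of arithmetic worth keeping around. The figures below (a T2 of 200 µs, a 68 ns two-qubit gate, a 600 ns readout, and a rule of thumb of spending at most a tenth of T2 on the whole circuit) are placeholders, not measured values for any particular device.

    t2_us = 200.0             # illustrative T2, microseconds
    twoq_gate_ns = 68.0       # illustrative two-qubit gate duration
    readout_ns = 600.0        # illustrative measurement time
    budget_fraction = 0.1     # spend at most ~10% of T2 on the whole circuit

    budget_ns = t2_us * 1_000 * budget_fraction - readout_ns
    layers = int(budget_ns // twoq_gate_ns)
    print(f"~{layers} sequential two-qubit layers inside the budget")
    # That budget covers loader + arithmetic + every amplification round, which is
    # why the shallow-circuit, many-round iterative variant suits noisy devices.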

Readout fidelity. Measurement is still the weakest link on most transmon stacks. For amplitude estimation you only need a small register read out, but you need it well-calibrated. Readout error mitigation via tensored calibration matrices is mandatory; it is also expensive in shots, and the shot count interacts with the speed-up budget you were trying to claim in the first place.
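
Conceptually, tensored readout calibration is a per-qubit confusion matrix, tensored up and inverted against the measured distribution. A hand-rolled NumPy version of the idea, with made-up assignment errors, is below; a production stack would use a maintained package such as mthree or the calibration tooling in qiskit-experiments rather than this direct inverse.

    import numpy as np
    from itertools import product

    def confusion(p01, p10):
        # Columns are the prepared state, rows the measured outcome:
        # p01 = P(read 1 | prepared 0), p10 = P(read 0 | prepared 1). Illustrative values.
        return np.array([[1 - p01, p10],
                         [p01, 1 - p10]])

    qubit_cals = [confusion(0.02, 0.05), confusion(0.03, 0.08)]   # two measured qubits

    # The full assignment matrix is the tensor product of the per-qubit matrices.
    A = qubit_cals[0]
    for m in qubit_cals[1:]:
        A = np.kron(A, m)

    measured = np.array([0.90, 0.04, 0.05, 0.01])   # noisy distribution over 00, 01, 10, 11

    mitigated = np.linalg.solve(A, measured)        # invert the readout model
    mitigated = np.clip(mitigated, 0, None)
    mitigated /= mitigated.sum()                    # re-project onto a valid distribution

    for bits, p in zip(product("01", repeat=2), mitigated):
        print("".join(bits), f"{p:.4f}")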

Transition risk modelling: a concrete circuit sketch

Take a stylised transition-risk model. The risk factor is a single carbon price path discretised to n_c qubits. The sectoral exposure register encodes which of 2^n_s sectoral buckets an obligor sits in. A coupling block applies a sector-specific elasticity to produce an obligor-level shock, and a default model — say, a Merton-style threshold — flags default into a single qubit per obligor.
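
In Qiskit terms the register layout for that stylised model is three named registers plus workspace; the coupling and default blocks are left as comments because their decomposition depends on how the elasticities are encoded.

    from qiskit import QuantumCircuit, QuantumRegister

    n_c, n_s, n_obligors = 5, 4, 10

    carbon = QuantumRegister(n_c, "carbon")         # discretised carbon-price path
    sector = QuantumRegister(n_s, "sector")         # 2**n_s sectoral buckets
    default = QuantumRegister(n_obligors, "dflt")   # one default flag per obligor

    circuit = QuantumCircuit(carbon, sector, default)
    # 1) loader prepares the joint carbon/sector distribution on `carbon` and `sector`
    # 2) coupling block applies sector-conditioned elasticities per obligor (placeholder)
    # 3) Merton-style threshold flips each obligor's qubit in `default` (placeholder)
    print(circuit.num_qubits, "qubits before loss accumulation and comparator ancillas")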

For a small but realistic test, with n_c = 5, n_s = 4, and ten obligors with one default-flag qubit each, you are already at around 19 qubits before ancillas. Add an arithmetic register for portfolio loss accumulation and the comparator ancillas, and you are comfortably into the 40-60 qubit range. That is exactly the regime where a 100-physical-qubit machine starts to be the right tool — not because you have logical qubits to spare, but because you have headroom to absorb ancillas, error-mitigation overhead, and SWAP routing without spilling off the device.
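
A back-of-envelope budget function makes the jump from 19 qubits to the 40-60 range concrete. It assumes integer exposure weights summed by a WeightedAdder-style accumulator (a sum register of ceil(log2(total weight + 1)) qubits plus roughly one carry ancilla per sum qubit) and a comparator needing one ancilla per sum qubit minus one, plus one objective qubit; these are textbook overheads and a real synthesised circuit will differ.

    from math import ceil, log2

    def qubit_budget(n_c, n_s, n_obligors, max_weight):
        state = n_c + n_s + n_obligors                   # risk factors + default flags
        n_sum = ceil(log2(n_obligors * max_weight + 1))  # loss accumulation register
        adder_anc = max(n_sum - 1, 0)                    # carry ancillas (WeightedAdder-style)
        cmp_anc = max(n_sum - 1, 0) + 1                  # comparator ancillas + objective qubit
        return state, state + n_sum + adder_anc + cmp_anc

    before, after = qubit_budget(n_c=5, n_s=4, n_obligors=10, max_weight=15)
    print(before, "qubits before loss arithmetic;", after, "with accumulator, ancillas and objective")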

What you should not expect from this circuit on near-term hardware: a clean quadratic speed-up against a well-tuned classical CVA engine on a GPU. What you should expect, and what is worth measuring carefully: where the cross-over point sits as gate fidelities improve, and how the cost curve bends as error-mitigation overhead falls. Those measurements are the real deliverable of the next eighteen months.

What a sovereign machine changes for climate finance quantum work

Most quantum-cloud access today is metered, jurisdictionally awkward for regulated financial workloads, and slow when you want to iterate. Three things change when the device sits inside the same regulatory perimeter as the bank or the insurer running the workload.

First, data residency stops being a paperwork exercise. Portfolio-level loss distributions are commercially sensitive even before you start joining them to climate scenarios; running them through a foreign cloud quantum endpoint is a conversation nobody enjoys. Second, calibration cycles can be aligned to workload cycles rather than to a multi-tenant queue. For amplitude estimation that depends on consistent gate behaviour across thousands of shots, this matters more than headline qubit counts. Third, the chemistry workloads that share the machine — battery materials, carbon-capture sorbent screening, photovoltaic candidates — feed directly into the transition-risk story by tightening the technology-cost curves that the financial models assume. That feedback loop is the part of the Ireland Quantum 100 programme that I think is genuinely under-appreciated: chemistry and finance are the same workload at different layers.

For teams that want to go deeper on the chemistry side, the climate workloads roadmap covers the prioritisation in more detail.

Honest expectations for the next twelve months

Site fit-out runs through Q3 2026, the cryostat install through Q4, and first-light single-qubit work into Q1 2027. Multi-qubit access for early cohort users opens from Q2 2027. The amplitude-estimation work for climate finance will not produce a production CVA replacement on that timeline. What it can produce is a benchmarked, peer-reviewable comparison on a real superconducting device of iterative amplitude estimation against classical Monte Carlo for a representative transition-risk problem, with the cross-over points and error-mitigation costs documented honestly.

That is a more useful artefact than a press release. It tells a CRO when this technology becomes relevant for their actual book, and it tells a quant team where to start writing the code now so they are not learning Qiskit and OpenQASM 3 the week the queue opens.

Where to start this week

If you run climate-finance modelling and you want to be ready for amplitude-estimation workloads when sovereign capacity arrives: pick one tail-risk metric in your existing pipeline — expected shortfall on a transition-stressed sub-portfolio is a good candidate — and reimplement it as a small simulated amplitude-estimation circuit using Qiskit or PennyLane on a classical simulator. Twelve qubits is enough to see the scaling curve. Profile what the loader, the arithmetic, and the amplitude amplification each cost in depth and two-qubit gates; that breakdown is the map you will want when hardware access opens.
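
One way to get that breakdown on a simulator, assuming you have built the pieces as separate QuantumCircuit objects (the toy stand-ins below are placeholders for your own loader and loss arithmetic):

    from qiskit import QuantumCircuit, transpile

    basis = ["rz", "sx", "x", "cx"]

    def profile(blocks):
        # Report depth and two-qubit gate count for each named sub-circuit.
        for name, block in blocks.items():
            compiled = transpile(block, basis_gates=basis, optimization_level=3)
            print(f"{name:>15}: depth {compiled.depth():>4}  two-qubit gates {compiled.num_nonlocal_gates():>4}")

    loader = QuantumCircuit(3)
    for q, angle in enumerate([0.4, 0.7, 1.1]):
        loader.ry(angle, q)
    arithmetic = QuantumCircuit(3)
    arithmetic.ccx(0, 1, 2)
    profile({"loader": loader, "loss arithmetic": arithmetic})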

Research collaboration or early access

Book a research call →