Intelligence Brain · Shadow Mode

Shadow mode — the 90-day pattern that earns AI the right to act


Why shadow mode exists

Most AI rollouts I've seen go one of two ways. Either the system is bolted into a live process on day one and starts making decisions nobody can audit, or it sits in a sandbox forever because no one trusts it enough to let it near a real customer. Both fail.

Shadow mode is the middle path. The Intelligence Brain runs against your real data, on your real decisions, in real time — but it doesn't act. It observes. It records what it would have done. You compare that to what actually happened. After 90 days you have a body of evidence, not a vendor pitch, telling you whether the system is safe to let off the leash.

I built this in because I spent twenty years inside Tesco, Dunnes, and Oracle watching software get deployed on faith. Faith is not a control. Evidence is.

What "90 days" actually means

The 90-day number isn't arbitrary and it isn't marketing. It's the minimum window I've found where you can see:

  • Two full month-end cycles, ideally three
  • At least one edge case that wasn't in the training conversations
  • One staffing change, one process change, or one upstream system glitch — because something always happens in 90 days
  • Enough decision volume to make the agreement rate statistically meaningful rather than anecdotal

If your decision volume is low — say, a credit committee that meets twice a month — 90 days may not be enough and I'll tell you that. The point is to earn the right to act, not to hit a calendar date.
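"Statistically meaningful" is checkable, not a feeling. One way to sanity-check your decision volume is to look at how wide the confidence interval around an observed agreement rate would be — the sketch below uses a standard 95% Wilson score interval; the decision counts are hypothetical, not from any real deployment:

```python
import math

def agreement_interval(agreements: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed agreement rate.
    A wide interval means the rate is still anecdotal, not evidence."""
    p = agreements / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - margin, centre + margin

# A committee meeting twice a month yields roughly 6 decisions in 90 days.
# 5 agreements out of 6 looks like 83%, but the interval is enormous:
lo, hi = agreement_interval(5, 6)        # spans roughly 0.44 to 0.97

# The same apparent rate over real volume is a different animal:
lo, hi = agreement_interval(1150, 1250)  # roughly 0.90 to 0.93
```

That's the whole argument behind "90 days may not be enough" for low-volume processes: the window has to be long enough that the interval tightens to something you would act on.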

What the Brain does during shadow mode

During the shadow period, the Intelligence Brain sits behind your existing process and runs in parallel. For each decision your team or your existing systems make, it logs:

  • What it observed — the inputs, the context, the relevant policies
  • What it would have decided or recommended
  • The reasoning chain, in plain English, with the source documents and rules cited
  • The confidence level, and where that confidence came from
  • The actual human or system decision, once it lands

That last point matters. Shadow mode isn't the Brain marking its own homework. It's the Brain's output sitting next to your team's output, and a reviewer looking at both.

Three things you'll learn that you didn't expect

Every shadow deployment I've run has surfaced things the customer didn't ask for. Usually it's one of these:

  • Inconsistency between reviewers. Two people applying the same policy to similar cases and reaching different conclusions. Not because anyone is wrong — because the policy has gaps.
  • Decisions made on stale data. A team relying on a spreadsheet that was last refreshed three weeks ago, while the source system has moved on.
  • Policy drift. The written policy says one thing. The actual practice is something else. Usually the practice is more sensible. The policy needs updating.

None of this is a failure. It's the value of the audit-first approach — you find out how your business actually runs before you automate it.

What "earning the right to act" looks like

At the end of the shadow period you have a report. Not a dashboard, a report — something you can put in front of a regulator, a board, or an internal audit team. It tells you:

  • The agreement rate between the Brain and your existing process, broken down by decision type
  • Where the disagreements were, and which side was right when reviewed
  • The confidence-versus-accuracy curve — does the Brain know when it doesn't know?
  • The categories of decision where the Brain is ready to act, the categories where it should advise only, and the categories where it shouldn't be involved at all
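The first and third of those report lines reduce to two straightforward computations over the shadow log: agreement rate grouped by decision type, and accuracy bucketed by the Brain's stated confidence. A minimal sketch, on hypothetical rows rather than real log data:

```python
from collections import defaultdict

# Hypothetical shadow-log rows: (decision_type, brain_call, actual_call, confidence)
rows = [
    ("credit_limit", "approve", "approve", 0.95),
    ("credit_limit", "refer",   "approve", 0.55),
    ("write_off",    "decline", "decline", 0.90),
    ("write_off",    "decline", "refer",   0.60),
]

# Agreement rate, broken down by decision type
by_type = defaultdict(lambda: [0, 0])      # type -> [agreements, total]
for dtype, brain, actual, _ in rows:
    by_type[dtype][0] += brain == actual
    by_type[dtype][1] += 1

# Confidence-versus-accuracy: bucket decisions by stated confidence and
# check accuracy within each bucket. A system that "knows when it doesn't
# know" is roughly as accurate as it is confident, bucket by bucket.
buckets = defaultdict(lambda: [0, 0])      # bucket -> [correct, total]
for _, brain, actual, conf in rows:
    b = round(conf, 1)                     # 0.1-wide confidence buckets
    buckets[b][0] += brain == actual
    buckets[b][1] += 1

for b in sorted(buckets):
    correct, total = buckets[b]
    print(f"confidence ~{b:.1f}: accuracy {correct / total:.0%} over {total} decisions")
```

In the toy rows above, the high-confidence bucket is right and the low-confidence bucket is wrong — which is what a well-calibrated system should look like. Real reports need the interval maths from the volume discussion on top of this, but the shape is the same.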

From that report you and your team decide what moves to advisory mode, what moves to action mode with human approval, and what moves to action mode unsupervised. That decision is yours. The Brain doesn't promote itself.

Why this matters in regulated firms

If you're in financial services, healthcare, legal, or any other regulated vertical, your regulator will eventually ask how you validated the AI system before it touched a customer. "The vendor said it works" is not an answer. "We ran it in shadow for 90 days against 14,000 real decisions and here's the variance analysis" is an answer.

The Central Bank of Ireland, the FCA, and the EU AI Act all point in the same direction: you need evidence, traceability, and human oversight that's documented. Shadow mode is how the Intelligence Brain gives you those three things by default rather than as an afterthought.

It also gives your team time. Nobody wants AI dropped on them on a Monday morning with a memo. Ninety days of side-by-side running gives the people who'll actually use the system time to trust it, push back on it, and shape it.

What to do next

If you want to see how shadow mode fits into the wider on-premise architecture — the audit log, the policy layer, the connectors — read the Intelligence Brain overview. If you're in financial services specifically and want to see how this maps to Central Bank expectations and credit decisioning, the financial services page is the better starting point.

Either way, if you want to talk through whether your decision volume and process maturity make sense for a 90-day shadow, email me directly: mike@impt.io. I'll tell you straight whether it's a fit.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →