Intelligence Brain · Governance

Multi-agent governance for regulated industries

Why governance is the actual product

When I started building the Intelligence Brain, the temptation was the same one everyone in this market falls for — slap a chat box on top of a vector store, call it enterprise AI, and ship. I didn't do that, because I spent twenty years inside regulated firms watching what happens when an unauditable system touches a customer. You get fined, or you get sued, or you get a Saturday morning phone call from a regulator. Sometimes all three.

So the Intelligence Brain isn't a chatbot with governance bolted on. The governance is the architecture. Every action a model takes is planned, executed, and reviewed by separate agents with separate responsibilities — and the whole chain is logged on-premise, on your hardware, under your control. That's what regulated AI has to look like if it's going to survive a Central Bank inspection or a DPC audit.

The planner, worker, auditor pattern

Multi-agent governance, in the Intelligence Brain, means three distinct roles running against every request:

  • The planner decomposes the user's intent into a sequence of steps. It doesn't touch data. It produces a plan, in structured form, that a human or an automated policy can read before anything else happens.
  • The worker executes the plan against your data sources — documents, databases, internal APIs. It is constrained to the steps the planner produced. It cannot freelance.
  • The auditor reviews both the plan and the worker's output before anything is returned. It checks for policy violations, data leakage, hallucinated citations, and out-of-scope answers. If the auditor rejects the output, the response is blocked or routed to a human.
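
To make the split concrete, here's a minimal sketch of how the three roles can be wired together. The names (Plan, plan_request, execute_plan, audit) are mine for illustration, not the Brain's actual API; what matters is that the worker only ever sees the planner's steps and the auditor sits between the worker's draft and the user.

    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        intent: str                  # the user's request as the planner restates it
        steps: list[str]             # the only actions the worker is allowed to take
        scope: set[str] = field(default_factory=set)   # sources the worker may touch

    @dataclass
    class Draft:
        response: str
        sources_used: set[str]

    @dataclass
    class Verdict:
        approved: bool
        reasons: list[str]

    def plan_request(prompt: str) -> Plan:
        # Planner: decompose intent into bounded steps. Touches no data.
        return Plan(intent=prompt,
                    steps=["retrieve complaints-handling policy", "summarise obligations"],
                    scope={"policy_docs"})

    def execute_plan(plan: Plan) -> Draft:
        # Worker: runs only the planner's steps against the declared sources.
        return Draft(response="Summary drawn from policy_docs ...",
                     sources_used={"policy_docs"})

    def audit(plan: Plan, draft: Draft) -> Verdict:
        # Auditor: block anything the worker touched outside the declared scope.
        out_of_scope = draft.sources_used - plan.scope
        if out_of_scope:
            return Verdict(False, [f"out-of-scope retrieval: {sorted(out_of_scope)}"])
        return Verdict(True, [])

    def handle_request(prompt: str) -> str:
        plan = plan_request(prompt)
        draft = execute_plan(plan)
        verdict = audit(plan, draft)
        if not verdict.approved:
            return "Blocked pending human review: " + "; ".join(verdict.reasons)
        return draft.response

    print(handle_request("Summarise our complaints-handling obligations"))

The real auditor checks far more than scope, but even this toy version shows the structural point: no single agent both decides and acts unchecked.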

This is not novel research. The pattern exists in academic literature and in a handful of frontier labs. What's missing in the market is a productised version that runs on a single appliance, inside your perimeter, against your documents, with a UI a compliance officer can actually use. That's the gap I'm filling.

What "regulated" actually demands

Firms in financial services, healthcare, legal, and public sector in Ireland are dealing with overlapping pressure: GDPR, the EU AI Act, DORA for financial entities, sector-specific guidance from the Central Bank, and the practical reality that their data cannot leave the jurisdiction — sometimes cannot leave the building.

For AI compliance in Ireland specifically, three things have to be demonstrable, not just claimed:

  • Data residency. The model, the index, the logs, and the prompts all stay on infrastructure you control. No cloud round-trips. No third-party telemetry.
  • Decision traceability. For any answer the system gave, you can reconstruct exactly which documents informed it, which agent did what, and what the auditor flagged or cleared.
  • Human override. A named human can intervene at the planner stage, the worker stage, or the auditor stage. The system never makes a last-mile decision without a recorded human in the loop where one is required (sketched below).
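
Here's one way that recorded checkpoint can look in practice. The stage names, the REQUIRE_APPROVAL policy, and the reviewer identifier are assumptions for the sketch, not the Brain's configuration format.

    from datetime import datetime, timezone

    # Hypothetical policy: which stages need a named human sign-off for this workload.
    REQUIRE_APPROVAL = {"planner": False, "worker": False, "auditor": True}

    approvals_log: list[dict] = []   # in the Brain this feeds the signed audit trail

    def needs_human(stage: str) -> bool:
        return REQUIRE_APPROVAL[stage]

    def checkpoint(stage: str, payload: str, reviewer: str, approved: bool) -> bool:
        # Record the named reviewer's decision; a refusal stops the pipeline here.
        approvals_log.append({
            "stage": stage,
            "reviewer": reviewer,
            "approved": approved,
            "at": datetime.now(timezone.utc).isoformat(),
            "payload_preview": payload[:120],
        })
        return approved

    if needs_human("auditor"):
        ok = checkpoint("auditor", "Final response draft ...",
                        reviewer="compliance_officer_01", approved=True)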

If your AI vendor cannot give you all three, on paper, with screenshots, you do not have a regulated AI system. You have a liability.

How the audit trail is structured

Every interaction produces a signed, append-only record. The record contains: the original user prompt, the planner's decomposition, every document or row the worker retrieved, the worker's intermediate reasoning, the auditor's verdict, and the final response delivered to the user. Each entry is hashed and chained — so a tampered log breaks the chain and the next inspection catches it.
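
Here's a minimal sketch of that chaining, assuming SHA-256 over a JSON record per interaction. The field names echo the list above, but the schema is a placeholder and the cryptographic signature over each entry is left out to keep the sketch short; this is not the Brain's wire format.

    import hashlib, json

    def append_record(chain: list[dict], record: dict) -> dict:
        # Each entry carries the hash of the previous entry, so any edit breaks the chain.
        prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
        body = {"prev_hash": prev_hash, **record}
        entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "entry_hash": entry_hash}
        chain.append(entry)
        return entry

    def verify_chain(chain: list[dict]) -> bool:
        # Recompute every hash; a tampered entry invalidates it and everything after it.
        prev_hash = "0" * 64
        for entry in chain:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body.get("prev_hash") != prev_hash or recomputed != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

    chain: list[dict] = []
    append_record(chain, {
        "response_id": "resp-001",
        "prompt": "Summarise complaints handling for Q3",
        "plan": ["retrieve complaints register", "summarise by category"],
        "retrieved": ["complaints_register_2024.xlsx"],
        "worker_reasoning": "...",
        "auditor_verdict": "approved",
        "response": "In Q3 ...",
    })
    assert verify_chain(chain)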

This matters because under the AI Act, providers and deployers of high-risk systems have to keep records that allow regulators to assess conformity. "We have logs somewhere" is not a defence. The Brain produces records that are structured, queryable, and exportable in a format your DPO and your external auditors can both work with.
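
And "queryable" can be as unglamorous as filtering that same chain for a single answer and handing the result to your DPO or external auditor. This continues the sketch above; the record fields are still placeholders.

    def export_trace(chain: list[dict], response_id: str) -> str:
        # Pull every entry behind one answer and export it as reviewable JSON.
        trace = [entry for entry in chain if entry.get("response_id") == response_id]
        return json.dumps(trace, indent=2)

    print(export_trace(chain, "resp-001"))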

What the planner-worker-auditor split actually prevents

Three failure modes that have already cost firms real money elsewhere:

  • Prompt injection that exfiltrates data. A document inside your corpus contains a hostile instruction. A single-agent system follows it. In the Brain, the worker has no authority to act outside the planner's plan, and the auditor checks the output against the original intent — so injected instructions get caught at the boundary.
  • Hallucinated citations. The auditor verifies that every cited source actually exists in the index and actually contains the claim being made. Anything else is rejected before the user sees it.
  • Scope creep. A user asks for a customer summary; the model decides to also look at salary data it has access to but shouldn't surface. The planner declares scope; the auditor enforces it. Out-of-scope retrieval is blocked and logged.
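
To show how the second of those checks can work in practice, here's a sketch of a citation verifier: every cited source has to exist in the index and has to actually contain the passage the answer leans on. The plain substring match stands in for whatever entailment check a production auditor uses, and the index contents and names are invented for the example.

    def verify_citations(claims: list[dict], index: dict[str, str]) -> list[str]:
        # Each claim cites a source id and the passage it relies on.
        # Reject citations to documents that don't exist or don't support the claim.
        problems = []
        for claim in claims:
            doc = index.get(claim["source_id"])
            if doc is None:
                problems.append(f"cited source {claim['source_id']!r} is not in the index")
            elif claim["quoted_passage"] not in doc:
                problems.append(f"{claim['source_id']!r} does not contain the quoted passage")
        return problems   # anything in this list and the auditor blocks the response

    index = {"aml_policy_v4": "Enhanced due diligence applies to all PEP relationships."}
    claims = [
        {"source_id": "aml_policy_v4",
         "quoted_passage": "Enhanced due diligence applies to all PEP relationships."},
        {"source_id": "aml_policy_v9",   # hallucinated document: not in the index
         "quoted_passage": "EDD may be waived for existing customers."},
    ]
    print(verify_citations(claims, index))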

None of this is theoretical. These are the patterns that turn a pilot into a regulator's case study, and the multi-agent split exists specifically to stop them.

What to do next

If you're scoping AI for a regulated environment in Ireland and you want to see how the planner, worker, and auditor pattern behaves against your own documents, the right starting point is a structured assessment — not a demo, not a sandbox.

Read the Intelligence Brain overview for the full architecture, or if you're in financial services specifically, the financial services landing page covers the DORA and Central Bank angle in detail. Either page has a contact form that goes straight to me.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →