How I actually deploy intelligence into a regulated business
Most AI projects fail in the same way. Someone buys a licence, a few people start using a chatbot, and six months later there's nothing to show for it except an invoice and a vague feeling that the technology "isn't ready". The technology is fine. The method is wrong.
I've spent the last two years working out what actually moves the needle when you're trying to put real intelligence into a regulated firm — a credit union, a broker, a manufacturer, a council. The answer isn't a clever prompt or a shinier model. It's a four-stage methodology I run on every Intelligence Brain deployment: ingest, structure, swarm, audit. Each stage has to be done in order. Skip one and the whole thing falls over.
Here's what each stage actually involves and why it matters.
Stage one — ingest: get the organisation's knowledge into one place
The first job is to gather every piece of operational knowledge that matters. Not the marketing brochure. The real stuff — policies, procedures, board minutes, regulatory correspondence, training material, system documentation, the spreadsheets people actually use, the email threads where decisions got made.
In a typical mid-sized regulated firm this is somewhere between two and twenty thousand documents, scattered across SharePoint, shared drives, email, a CRM, and three or four line-of-business systems. Some of it is twenty years old. A lot of it contradicts itself.
I don't try to clean any of it during ingest. The point of stage one is volume and coverage — get it all into the Brain, on-premise, encrypted, indexed. The contradictions and the dead wood get dealt with at the next stage. Trying to tidy as you ingest is how a six-week project becomes a two-year project.
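To make the stage-one discipline concrete, here's a minimal sketch of what an ingest pass records. It's a sketch under assumptions, not the Brain's real pipeline: the names (IngestRecord, ingest_tree) are mine, and an actual deployment pulls from SharePoint, email and the CRM rather than walking a local folder. What it illustrates is the rule above: hash it, timestamp it, keep it, clean nothing.

```python
# Minimal ingest sketch: capture everything as-is, clean nothing.
# IngestRecord and ingest_tree are illustrative names, not the
# Intelligence Brain's actual schema.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class IngestRecord:
    source: str     # e.g. "sharepoint", "shared-drive", "crm-export"
    path: str       # where the document came from
    sha256: str     # content hash; duplicates get resolved in stage two, not here
    modified: str   # last-modified timestamp from the source system
    ingested: str   # when the Brain took its copy

def ingest_tree(source_name: str, root: Path) -> list[IngestRecord]:
    """Walk one source and record every file as-is. No filtering,
    no deduplication, no tidying -- that is stage two's job."""
    records = []
    for f in root.rglob("*"):
        if not f.is_file():
            continue
        records.append(IngestRecord(
            source=source_name,
            path=str(f),
            sha256=hashlib.sha256(f.read_bytes()).hexdigest(),
            modified=datetime.fromtimestamp(
                f.stat().st_mtime, tz=timezone.utc).isoformat(),
            ingested=datetime.now(timezone.utc).isoformat(),
        ))
    return records

if __name__ == "__main__":
    manifest = ingest_tree("shared-drive", Path("/mnt/shared"))
    print(json.dumps([asdict(r) for r in manifest[:3]], indent=2))
```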
What you end up with at the end of stage one is a complete, searchable record of everything the organisation knows about itself. That alone is usually a revelation. Most firms have never had this.
Stage two — structure: turn documents into knowledge
Raw documents are not knowledge. A 47-page credit policy is not the same thing as the answer to "can I lend to this member?". Stage two is where I turn the ingested material into a structured intelligence layer the organisation can actually query.
This means three things. First, breaking the documents into meaningful units — a clause, a procedure step, a definition — and tagging them with the regulatory regime, business area, owner, and effective date. Second, identifying the contradictions and the gaps, and putting them in front of the people who can resolve them. Third, building the relationships between pieces of knowledge so the Brain understands that the lending policy refers to the affordability procedure which refers to the CCR rules which refer to the Central Bank's 2023 guidance.
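Here's roughly what a structured unit might look like, with the caveat that the field names are illustrative rather than the Brain's real schema. It shows the three moves in miniature: small units, tags that carry the regime, business area, owner and effective date, and explicit references you can follow from the lending policy down to the underlying rules.

```python
# Illustrative shape of a structured knowledge unit. KnowledgeUnit,
# chain and the sample values are assumptions for this sketch, not
# the Brain's real data model.
from dataclasses import dataclass, field

@dataclass
class KnowledgeUnit:
    unit_id: str
    text: str            # one clause, procedure step, or definition
    regime: str          # e.g. "Consumer credit", "CCR", "AML/CFT"
    business_area: str   # e.g. "lending"
    owner: str           # who can resolve contradictions in this unit
    effective_date: str  # ISO date the clause took effect
    refers_to: list[str] = field(default_factory=list)  # ids of related units

def chain(units: dict[str, KnowledgeUnit], start: str) -> list[str]:
    """Follow references so a query about the lending policy also
    surfaces the procedures and rules it ultimately rests on."""
    seen, stack = [], [start]
    while stack:
        uid = stack.pop()
        if uid in seen:
            continue
        seen.append(uid)
        stack.extend(units[uid].refers_to)
    return seen

units = {
    "lending-policy-s4": KnowledgeUnit(
        "lending-policy-s4", "Loans above the board-approved limit require ...",
        "Consumer credit", "lending", "head-of-credit", "2024-01-01",
        refers_to=["affordability-proc-2"]),
    "affordability-proc-2": KnowledgeUnit(
        "affordability-proc-2", "Assess net disposable income using ...",
        "CCR", "lending", "head-of-credit", "2023-06-15",
        refers_to=["ccr-rule-11"]),
    "ccr-rule-11": KnowledgeUnit(
        "ccr-rule-11", "Enquire with the Central Credit Register before ...",
        "CCR", "lending", "compliance", "2023-01-01"),
}
print(chain(units, "lending-policy-s4"))
# -> ['lending-policy-s4', 'affordability-proc-2', 'ccr-rule-11']
```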
This is the stage clients underestimate. It's also the stage that determines whether the whole deployment is worth anything. A well-structured Brain answers questions a human would take half a day to research. A badly structured one regurgitates documents.
Stage three — swarm: deploy specialist agents against real work
Once the knowledge is structured, I deploy a swarm of specialist agents on top of it. Not one big general-purpose chatbot — that's the design pattern that fails. A swarm of focused agents, each with a defined job, defined boundaries, and defined escalation paths.
For a credit union that might be a lending-policy agent, an AML-screening agent, a member-correspondence drafter, a board-pack assembler, and a regulatory-change monitor. Each one knows what it's allowed to do, what it has to escalate, and what evidence it has to produce when it acts.
The swarm pattern matters because it mirrors how the organisation actually works. Real businesses have specialists who collaborate, not generalists who do everything. The Brain should be the same. It also makes failure modes manageable — if the AML agent starts behaving oddly, you can isolate and retune it without tearing down the whole deployment.
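If you want to picture the per-agent contract, it's something like this. The class and the example values are hypothetical, but the shape is the point: a defined job, a hard boundary, and escalation triggers that put a human in the loop rather than letting the agent guess.

```python
# Sketch of the per-agent contract: defined job, defined boundaries,
# defined escalation. AgentSpec and act are illustrative names, not
# the deployed agents' real interfaces.
from dataclasses import dataclass

@dataclass
class AgentSpec:
    name: str
    allowed_actions: set[str]       # the defined job
    escalation_triggers: set[str]   # conditions a human must see
    evidence_required: bool = True  # every completed action produces evidence

def act(spec: AgentSpec, action: str, condition: str | None = None) -> str:
    """Enforce the boundary before anything happens: out-of-scope work
    is a hard refusal, a trigger condition becomes a human escalation."""
    if action not in spec.allowed_actions:
        return f"{spec.name}: REFUSED -- '{action}' is outside my boundary"
    if condition in spec.escalation_triggers:
        return f"{spec.name}: ESCALATED -- '{condition}' needs a human decision"
    ok = "evidence logged" if spec.evidence_required else "no evidence required"
    return f"{spec.name}: completed '{action}' ({ok})"

aml = AgentSpec(
    name="aml-screening",
    allowed_actions={"screen-member", "flag-transaction"},
    escalation_triggers={"possible-sanctions-match"},
)
print(act(aml, "screen-member"))
print(act(aml, "screen-member", condition="possible-sanctions-match"))
print(act(aml, "approve-loan"))  # not its job: refused, never attempted
```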
Stage four — audit: every decision evidenced, every action logged
This is the stage that separates a regulated deployment from a toy. Every query, every answer, every action taken by every agent has to be logged, attributable, and reviewable. Not as an afterthought. As a core feature.
When the regulator asks why your firm took a particular decision on the third of November, you need to be able to show: the question that was asked, the documents the Brain drew on, the version of the policy that applied at the time, the agent that produced the answer, and the human who reviewed it. All of that has to come out as a clean evidence pack, not a screenshot from a chat window.
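A single decision record might carry something like the following. The field names and the sample values are made up for illustration, but they map one-to-one onto the list above, and the pack exports as structured data rather than a screenshot.

```python
# Hypothetical shape of one auditable decision. EvidenceRecord and
# evidence_pack are sketch names, not the Brain's export format; the
# sample values are invented.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class EvidenceRecord:
    timestamp: str             # when the decision was taken
    question: str              # the question that was asked
    source_doc_ids: list[str]  # the documents the Brain drew on
    policy_version: str        # the version of the policy in force at the time
    agent: str                 # the agent that produced the answer
    reviewed_by: str           # the human who signed it off

def evidence_pack(records: list[EvidenceRecord]) -> str:
    """Export a clean, machine-readable pack -- not a chat screenshot."""
    return json.dumps([asdict(r) for r in records], indent=2)

rec = EvidenceRecord(
    timestamp="2024-11-03T10:42:00Z",
    question="Can we lend EUR 15,000 to member 4471?",
    source_doc_ids=["lending-policy-v7-s4", "affordability-proc-v3"],
    policy_version="lending-policy v7 (effective 2024-09-01)",
    agent="lending-policy",
    reviewed_by="credit.officer@firm.example",
)
print(evidence_pack([rec]))
```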
Build the audit layer first and bolt the agents on top of it. Try it the other way around and you'll be ripping the system apart in eighteen months when your first inspection lands.
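One way to make audit-first concrete, as a hypothetical sketch rather than the actual architecture: the log exists before any agent does, and agents can only act through it, so the attempt is recorded even if the agent falls over mid-task.

```python
# Audit-first sketch: the log is the foundation, agents bolt on top.
# AuditLog and the example agent are invented for illustration.
from datetime import datetime, timezone

class AuditLog:
    """Built before any agent is registered."""
    def __init__(self):
        self.entries = []

    def run(self, agent_name: str, action, *args):
        # Log the attempt first, so even a crashing agent leaves a trace.
        entry = {"agent": agent_name, "action": action.__name__,
                 "at": datetime.now(timezone.utc).isoformat(),
                 "outcome": None}
        self.entries.append(entry)
        entry["outcome"] = action(*args)
        return entry["outcome"]

audit = AuditLog()  # the audit layer comes first ...

def draft_member_letter(member_id: str) -> str:
    return f"draft letter for {member_id}"

# ... and every agent action goes through it, never around it.
audit.run("member-correspondence", draft_member_letter, "member-4471")
print(audit.entries)
```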
Why the order matters
I get asked regularly whether the stages can run in parallel. The honest answer is no — not properly. You can't structure what you haven't ingested. You can't deploy a swarm against unstructured knowledge without producing nonsense. And you can't retrofit audit onto a system that wasn't designed for it.
The whole methodology takes between eight and sixteen weeks for most mid-sized firms. That's faster than the failed two-year projects I keep being called in to clean up, and the result actually works.
What to do next
If you want to see how the methodology applies to a specific sector — particularly credit unions, where the regulatory pressure makes this work urgent — have a look at the Intelligence Brain for credit unions page. If you'd rather see the product as a whole, the main Intelligence Brain overview covers the architecture, the on-premise model, and how to get started.
Or email me directly. I read everything that comes in.