
Seven legal-AI workflows that pay for themselves — and the auditor that signs every output

2026-05-02 · Michael English · Clonmel, Co. Tipperary

Most partners I talk to have already bought a legal AI tool. It sits in the corner of the practice, used by two associates and ignored by the rest. The bill arrives every quarter and nobody can quite say what it earned. That is not an AI problem. It is a workflow problem. A law firm is a series of well-defined production lines — discovery, review, drafting, filing, advising — and intelligence only pays back when you wire it into a specific line, with a specific input, a specific output, and a human who signs the result. Below are the seven workflows I would build first if I were standing up an AI capability inside a mid-sized firm this quarter, and the one model that has to sit above all of them.

1. Discovery — the first place the meter turns over

Discovery is where the maths is most obvious. A junior reads a box of documents at human speed. A model reads the same box in minutes and tags every page by relevance, privilege, and entity. The firms I have seen do this well do not try to replace the junior. They give the junior a pre-sorted pile, a confidence score on every tag, and a queue of edge cases that the model flagged as uncertain.

The trap here is the sales pitch that says "upload everything, get answers." That is not how discovery works in court. You need a chain of custody on every document, a log of every prompt, and a way to show the bench how a decision was reached. Build the pipeline so each document carries its own audit trail from ingest to production. The model is not the lawyer. The model is the paralegal that never sleeps and never loses a page.
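A minimal sketch of what "each document carries its own audit trail" could look like. The class name, fields, and event labels are illustrative, not a production chain-of-custody design; the point is that every event records a timestamp and a hash of the content at that moment, so tampering is detectable.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DiscoveryDocument:
    doc_id: str
    content: bytes
    events: list = field(default_factory=list)

    def log(self, action: str, detail: str) -> None:
        # Each entry records what happened, when, and a hash of the
        # document content at that moment in the pipeline.
        self.events.append({
            "action": action,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(self.content).hexdigest(),
        })

doc = DiscoveryDocument("DOC-0001", b"...scanned page text...")
doc.log("ingest", "loaded from box 14")
doc.log("model_tag", "relevance=0.91, privilege=none")
doc.log("human_review", "junior confirmed relevance tag")
print(len(doc.events))
```

If the three hashes differ between ingest and production, something changed the document mid-pipeline, and the log tells you exactly between which two steps.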

2. Contract review — the volume play

Contract review AI is the workflow most firms try first, and most firms get half-right. They run an NDA through a model, the model returns a redline, the associate accepts the redline, the partner signs. That works for a stack of NDAs. It falls over the moment the contract is bespoke.

The fix is to stop thinking of contract review as a single task. It is at least four: clause extraction, deviation from playbook, risk classification, and counter-proposal drafting. Each one wants a different prompt, a different reference set, and a different reviewer. A model that flags a non-standard indemnity clause is doing extraction. A model that proposes new language is doing drafting. Mixing them produces confident nonsense. Separate them and the same model that looked unreliable last month becomes the most-used tool in the firm.
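One way to keep the four tasks separate is to give each its own prompt and run them as an explicit pipeline rather than one mega-prompt. A sketch, assuming some model-calling function exists; the stage prompts here are invented placeholders, and a trivial fake model stands in so the example runs end to end.

```python
# The four stage names come from the text; the prompts are illustrative.
STAGES = {
    "extraction": "List every clause in this contract with its heading.",
    "deviation": "Compare each clause against the firm playbook and flag deviations.",
    "risk": "Classify each flagged deviation as low, medium, or high risk.",
    "counter_proposal": "Draft replacement language for each high-risk clause.",
}

def review_contract(contract_text: str, run_model) -> dict:
    """Run the four stages in order, feeding each stage's output forward.

    `run_model(prompt, context)` is a stand-in for whatever model call the
    firm actually uses. Collapsing these stages into one prompt is exactly
    the mixing the text warns against.
    """
    results, context = {}, contract_text
    for stage, prompt in STAGES.items():
        results[stage] = run_model(prompt, context)
        context = results[stage]  # next stage reviews the previous output
    return results

# A trivial fake model so the sketch is runnable without an API key.
fake = lambda prompt, ctx: f"[{prompt.split()[0].lower()} of {len(ctx)} chars]"
out = review_contract("This Agreement is made...", fake)
print(list(out.keys()))
```

Because each stage has its own output, each can also have its own reviewer and its own reference set, which is the whole point of the separation.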

3. Due diligence — where the partner actually feels the time back

Due diligence AI earns its keep on transactions because the work is broad, shallow, and time-boxed. A target has hundreds of contracts, a data room of corporate records, a stack of employment agreements, and three weeks until signing. No human team reads it all. They sample. The model does not sample.

What a good due diligence pipeline gives you is a structured summary per document — counterparty, term, change-of-control, assignment, governing law, anything unusual — and a heat map across the data room of where the risk is concentrated. The partner stops asking "did anyone read the leases?" and starts asking "why is one lease in Cork drafted so differently from the other twenty-three?" That is the question they were paid to ask in the first place. The model gets them to it on day two instead of day fifteen.
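The "heat map across the data room" is, at its simplest, an aggregation over the per-document summaries. A toy sketch with invented documents and field names (the fields mirror the list above); a real pipeline would have a model emit these summaries, but the aggregation step looks the same.

```python
from collections import Counter

# Illustrative per-document summaries; contents are invented.
summaries = [
    {"doc": "lease_cork_01.pdf", "governing_law": "Ireland",
     "change_of_control": True, "unusual": ["non-standard break clause"]},
    {"doc": "lease_dublin_02.pdf", "governing_law": "Ireland",
     "change_of_control": False, "unusual": []},
    {"doc": "supply_uk_03.pdf", "governing_law": "England & Wales",
     "change_of_control": True, "unusual": ["uncapped indemnity"]},
]

def heat_map(docs) -> Counter:
    """Count where risk is concentrated across the data room."""
    heat = Counter()
    for d in docs:
        if d["change_of_control"]:
            heat["change_of_control"] += 1
        heat["unusual_terms"] += len(d["unusual"])
    return heat

print(heat_map(summaries).most_common())
```

The partner never reads this table raw; it is what points them at the one Cork lease that is drafted differently from the other twenty-three.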

4. Citation checking — the workflow that pays for itself in embarrassment avoided

This one is small in hours and large in consequence. Every brief, every memo, every opinion contains citations. Some are wrong. Some are out of date. A few, in the worst cases the press has already enjoyed reporting, do not exist at all because a generative model hallucinated them.

A citation-checking workflow sits at the end of every document and does three things: confirms the case exists, confirms the quote matches the source, and confirms the case has not been overruled or distinguished in a way that undermines the argument. It is unglamorous. It is the cheapest insurance policy a firm will ever buy. Build it once, run it on every output, and the partner who has to sign the brief sleeps better.
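The three checks above can be sketched as one function. The citation database here is a faked dictionary with an invented case name; in practice the lookup would hit a real citator service, but the shape of the check is the same: an empty problem list means the citation passes.

```python
# Faked citation database; the case and quote are invented for illustration.
CITATION_DB = {
    "Smith v Jones [2019] IESC 12": {
        "text": "the duty of care extends to foreseeable third parties",
        "overruled": False,
    },
}

def check_citation(citation: str, quoted: str, db=CITATION_DB) -> list:
    """Return a list of problems; an empty list means the citation passes."""
    record = db.get(citation)
    if record is None:
        # Check 1: the case must exist at all.
        return ["citation not found: possible hallucination"]
    problems = []
    if quoted not in record["text"]:
        # Check 2: the quoted words must actually appear in the source.
        problems.append("quote does not match source")
    if record["overruled"]:
        # Check 3: the authority must still be good law.
        problems.append("case has been overruled or distinguished")
    return problems

print(check_citation("Smith v Jones [2019] IESC 12", "duty of care"))
```

Run against every brief before it goes out, the function's non-empty results are exactly the list of embarrassments avoided.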

5. First-pass drafting — the one with the warning label

Drafting is where the temptation is highest and the discipline must be strictest. A model can produce a credible first draft of a memo, a motion, a letter of advice, in seconds. It will sound right. It will read right. It will sometimes be wrong in ways that take a trained eye to catch.

The rule I would write into the firm's policy on day one: no AI-drafted text leaves the building without a named human author who has read every line. The model produces a first pass. The human is still the lawyer. If you cannot enforce that, do not deploy drafting AI at all. If you can, the productivity gain is real, because most legal writing is structural — the recitation of facts, the statement of the law, the application — and a model is very good at structure. It is the judgment in the application step that the human owns.

6. Knowledge retrieval — the firm's own brain, finally usable

Every firm I have worked with has the same complaint. "We did this deal before. I know we did. I cannot find the precedent." Twenty years of memos, opinions, contracts, and research sit on a shared drive that nobody can search properly. The keyword search returns three thousand documents. Nobody opens them.

A retrieval-augmented system grounded in the firm's own back catalogue — retrieval, not retraining, since the documents are indexed and searched rather than baked into the model — solves a problem that has been quietly costing firms money for two decades. An associate asks a question in natural language. The system returns the three most relevant prior matters, the partner who ran each, and the specific clauses or arguments that were used. The work of the firm becomes available to the firm. That is the workflow that, once it is in place, nobody will let you take away.
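A toy retrieval sketch over a back catalogue. Real systems use embeddings and a vector index; plain word overlap stands in here so the example runs without dependencies, and the matters, partners, and texts are all invented.

```python
MATTERS = [
    {"matter": "Project Ash share sale", "partner": "A. Byrne",
     "text": "share purchase agreement warranty cap escrow"},
    {"matter": "Cork lease renewal", "partner": "C. Doyle",
     "text": "commercial lease break clause rent review"},
    {"matter": "Dunmore supply dispute", "partner": "E. Flynn",
     "text": "supply contract termination for convenience notice"},
]

def retrieve(query: str, matters=MATTERS, top_k=3):
    """Rank matters by word overlap with the query; embeddings in real life."""
    q = set(query.lower().split())
    scored = [(len(q & set(m["text"].split())), m) for m in matters]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(m["matter"], m["partner"]) for score, m in scored[:top_k] if score > 0]

print(retrieve("break clause in a commercial lease"))
```

The answer that matters to the associate is not just the document but the partner's name attached to it; that is what turns a search result into a conversation.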

7. The auditor — the model that signs every output

This is the one that ties the whole thing together, and the one most firms skip. A legal swarm — and that is the right word for it once you have six or seven models running in parallel across these workflows — produces a lot of output. Some of it will be wrong. Some of it will contradict other parts of it. Some of it will look fine and be subtly off.

You need a model whose only job is to read the output of the other models. Not to do the work. To check the work. The auditor reads the contract review and asks: does this redline match the playbook? Reads the citation check and asks: did the verifier actually verify, or did it lazy-pass? Reads the first-pass draft and asks: does this draft contradict the due diligence summary it was based on?
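The three auditor questions above reduce to cross-checks over the other workflows' outputs. A sketch with invented output structures; in practice each check would itself be a model call, but the shape — read everything, emit findings, block on non-empty — is the core of the design.

```python
def audit(outputs: dict) -> list:
    """Cross-check the swarm's outputs; findings block release to the human."""
    findings = []
    # Does the redline match the playbook?
    for clause in outputs["contract_review"]:
        if clause["deviates"] and not clause["redlined"]:
            findings.append(f"deviation in {clause['name']} was not redlined")
    # Did the citation verifier actually verify, or did it lazy-pass?
    for cite in outputs["citations"]:
        if cite["status"] == "unchecked":
            findings.append(f"citation {cite['ref']} was never verified")
    # Does the draft contradict the diligence summary it was based on?
    for claim in outputs["draft_claims"]:
        if claim not in outputs["diligence_facts"]:
            findings.append(f"draft claim not supported: {claim}")
    return findings

# Invented example outputs from the other six workflows.
outputs = {
    "contract_review": [{"name": "indemnity", "deviates": True, "redlined": False}],
    "citations": [{"ref": "[2019] IESC 12", "status": "unchecked"}],
    "draft_claims": ["target owns the Cork lease"],
    "diligence_facts": ["target owns the Cork lease", "no change of control"],
}
print(len(audit(outputs)))
```

An empty findings list is the auditor's signature; a non-empty one routes the bundle back to the workflow that produced the problem.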

The auditor is not a human replacement either. It is the last machine layer before the human reviewer. Its job is to make sure the human is reviewing a coherent product, not seven disconnected ones. In every AI system I have built — at IMPT, in the architecture we are putting around the booking agent, in everything I learnt across two decades at Tesco, Dunnes Stores, and Oracle — the auditor model is the difference between a demo and a production system. Skip it and you will spend more time fixing AI mistakes than you saved by deploying AI in the first place.

Sequencing matters

If you are standing this up from scratch, do not try to do all seven at once. Start with citation checking, because it is small and the failure mode is loud. Add knowledge retrieval next, because it produces an immediate quality-of-life gain that the whole firm feels. Then contract review, then due diligence, then discovery, and only then drafting. The auditor goes in alongside the second workflow, not after the seventh. The longer you run a swarm without an auditor, the harder it becomes to retrofit one.

What to do this week

Pick one workflow. Citation checking is the cheapest place to start and the easiest to measure. Run it for a fortnight on every brief that goes out the door. Count the errors it catches. That number is your business case for the next six. At IMPT we are building the same pattern into our own operations — a swarm of narrow agents under a single auditor that signs the output before it reaches a human — because the lesson generalises beyond law. The firms that will pull ahead over the next two years are not the ones with the best model. They are the ones who decided, workflow by workflow, where the model belongs and where the lawyer still has to sign.

Reading these regularly?

Subscribe to Letters from Clonmel — quarterly long-form founder letters from Mike. First letter Q2 2026.
