Intelligence Brain · accounting

The bookkeeping side of the Intelligence Brain — what it does and what it doesn't do


Most of the bookkeeping AI demos I've seen this year fall into two buckets: a slick OCR layer bolted onto a SaaS platform, or a chatbot that pretends to understand a chart of accounts. Neither is what an Irish accounting practice actually needs. What a practice needs is something that reads the messy stuff — the supplier invoice with two VAT rates and a delivery charge, the bank line that says "STRIPE PAYOUT REF 8842", the WhatsApp photo of a fuel receipt — and turns it into clean, posted entries with an audit trail you'd be happy to show Revenue. That's the part of the Intelligence Brain I want to talk about honestly, including what it doesn't do.

What "bookkeeping" actually means inside the Brain

Bookkeeping isn't one task. It's a stack of decisions that compound. The Brain treats it as a pipeline with five stages: capture, classification, coding, reconciliation, and posting. Each stage has its own model, its own confidence threshold, and its own escape hatch for human review. I deliberately didn't build it as a single end-to-end model that goes from "photo of receipt" to "posted journal" because when that fails — and it does fail — you have no idea where it failed.

Capture is the document-in step. PDFs from email, photos from a phone, CSVs from a bank, XML from Revenue's ROS, EDI from larger suppliers. Classification answers "what is this thing?" — invoice, credit note, statement, receipt, remittance advice, contract, something else. Coding is the accounting decision: which nominal, which VAT rate, which cost centre, which project. Reconciliation matches the document to a bank line or a supplier statement. Posting writes the journal to the underlying ledger — Sage, Xero, Surf, BrightBooks, whatever the practice uses.
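The five stages can be sketched as a simple sequential pipeline with a per-stage confidence threshold and an escape hatch to human review. This is a minimal illustration, not the Brain's implementation; the stage names come from the text, but `StageResult`, `run_pipeline`, and the threshold values are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List

class Stage(Enum):
    CAPTURE = auto()
    CLASSIFICATION = auto()
    CODING = auto()
    RECONCILIATION = auto()
    POSTING = auto()

@dataclass
class StageResult:
    stage: Stage
    output: dict
    confidence: float           # 0.0-1.0, meaning is stage-specific
    needs_review: bool = False  # the "escape hatch" flag

def run_pipeline(document: dict,
                 stages: Dict[Stage, Callable[[dict], StageResult]],
                 thresholds: Dict[Stage, float]) -> List[StageResult]:
    """Run each stage in order; halt and flag for human review the
    moment a stage falls below its confidence threshold, so a failure
    is pinned to a stage rather than lost inside an end-to-end model."""
    results = []
    payload = document
    for stage in Stage:
        result = stages[stage](payload)
        if result.confidence < thresholds[stage]:
            result.needs_review = True
            results.append(result)
            break  # a human resolves this stage before the pipeline resumes
        results.append(result)
        payload = result.output
    return results
```

The point of the structure is the one made above: when a document fails, you know exactly which decision failed.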

The reason this matters: a generic LLM can do classification reasonably well out of the box. It cannot do coding for an Irish practice without being told what your nominal structure looks like, how you treat reverse-charge VAT on intra-EU services, and which clients are on cash receipts basis. That's local context, and local context is where most "AI bookkeeping" pitches fall down.

The capture layer: why OCR alone is not enough

Pure OCR gives you text. Text is not data. A typical Irish supplier invoice has the supplier name in three places (header, footer, VAT registration block), the invoice date in two formats, line items with mixed VAT rates, and a total that may or may not reconcile to the sum of lines once you account for rounding and a delivery charge that's VAT-inclusive on some templates and exclusive on others.

The capture layer in the Brain runs OCR first, then a layout model that understands invoice geometry — where blocks sit relative to each other — then a structured extraction step that pulls fields with a confidence score per field, not per document. The reason for per-field confidence is practical: the supplier name might be 99% certain, the VAT number 97%, but the line item breakdown 71%. You want to auto-post the header and flag only the lines for review. A document-level confidence score throws away that information.
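The per-field idea is small but load-bearing, so here it is as code. A minimal sketch under assumed names (`ExtractedField`, `review_queue`, and the 0.90 threshold are all illustrative, not the Brain's actual API):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # per-field, not per-document

def review_queue(fields: List[ExtractedField],
                 threshold: float = 0.90) -> List[ExtractedField]:
    """Return only the fields a bookkeeper needs to check.
    High-confidence fields pass through to auto-posting; anything
    below the threshold is flagged individually for review."""
    return [f for f in fields if f.confidence < threshold]
```

With the example figures above (supplier name 0.99, VAT number 0.97, line items 0.71), only the line items land in the queue. A single document-level score would have dragged all three in front of the bookkeeper, or none.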

For Irish-specific work, the capture layer is trained to recognise VAT registration number formats (IE plus seven digits and one or two letters), Revenue-issued document types, and the common formats of Irish utility, telco, and fuel suppliers. It also handles bilingual invoices and the fact that some suppliers still issue invoices with VAT shown only in the footer total.
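As a concrete illustration of the VAT number check, here is a regex for the format described above: "IE" followed by seven digits and one or two letters. This is a simplified sketch; real-world Irish VAT numbers include older historical variants, so treat the pattern as illustrative rather than a complete validator.

```python
import re

# Simplified pattern per the format described in the text:
# "IE" + seven digits + one or two letters. Older variants exist
# in the wild, so this is not a complete real-world validator.
IE_VAT = re.compile(r"^IE\d{7}[A-Z]{1,2}$")

def looks_like_irish_vat(raw: str) -> bool:
    """Normalise spacing and case, then check the simplified pattern."""
    candidate = raw.replace(" ", "").upper()
    return bool(IE_VAT.fullmatch(candidate))
```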

The coding decision: where local context lives

This is the stage that actually saves a bookkeeper time, and it's also the stage that most off-the-shelf tools get wrong. Coding is "this invoice from MyEnergi for €342.18 — does it go to motor expenses, plant and machinery, or capital additions?" The answer depends on the client, their trade, their previous treatment of similar invoices, and sometimes a conversation with the partner.

The Brain handles coding with what I call a practice-local rules layer sitting on top of a learned model. The rules layer is editable by the bookkeeper — it's literally a list of conditions and outcomes, version-controlled, with a human-readable diff. The learned model proposes a coding; the rules layer can override or constrain it; the bookkeeper sees both. Over time, when a bookkeeper consistently overrides a coding for a particular client and supplier combination, the Brain proposes a new rule rather than silently changing its model. That distinction matters for audit.
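The "propose a rule instead of silently retraining" behaviour can be sketched in a few lines. Everything here is hypothetical naming (`Rule`, `RuleProposer`, the threshold of three overrides); it shows the shape of the mechanism, not the Brain's implementation:

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Rule:
    client: str
    supplier: str
    nominal: str  # the coding outcome this rule forces

@dataclass
class RuleProposer:
    """Count bookkeeper overrides; once a (client, supplier, nominal)
    combination has been overridden consistently, propose an explicit,
    reviewable rule rather than silently adjusting the model."""
    min_overrides: int = 3
    _overrides: Counter = field(default_factory=Counter)

    def record_override(self, client: str, supplier: str,
                        chosen_nominal: str) -> Optional[Rule]:
        key = (client, supplier, chosen_nominal)
        self._overrides[key] += 1
        if self._overrides[key] == self.min_overrides:
            return Rule(client, supplier, chosen_nominal)
        return None
```

The proposed `Rule` is what lands in the version-controlled rules list for a human to accept or reject, which is what keeps the audit trail explainable.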

I'm explicit about this with practices I talk to: the model does not learn in production without supervision. Silent learning is how you end up with drift you can't explain. Every behavioural change is a proposed rule that a human accepts or rejects. You can read more about how this works across verticals on the accounting Intelligence Brain page.

Reconciliation: the part everyone underestimates

Bank reconciliation is where bookkeeping AI either earns its keep or becomes a glorified expense-claim app. The hard cases aren't the obvious ones — supplier X invoice, supplier X payment, same amount, same week. The hard cases are:

  • One bank line that pays three invoices, two from one supplier and one from another, because someone consolidated the payment
  • A Stripe or SumUp payout that nets fees against gross sales and arrives two days after the underlying transactions
  • A direct debit that gets reversed and re-presented under a different reference
  • A foreign currency payment where the bank's FX rate doesn't match the invoice's stated rate, leaving a small gain or loss to post
  • A payment made personally by the director that needs to land in the director's loan account, not as a missing supplier payment

The Brain's reconciliation engine is a constraint solver, not a similarity-matching model. It treats reconciliation as: given a set of unreconciled bank lines and a set of unreconciled documents, find the assignment that minimises unexplained value while respecting hard constraints (dates, signs, currency). The output is a set of proposed matches with a confidence and a reason, plus a list of items it genuinely cannot explain. That last list is the one the bookkeeper actually wants — the unknowns, surfaced clearly, rather than buried in a "review" queue alongside obvious matches.
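The constraint-solving idea can be illustrated with a deliberately small brute-force version: for one bank line, find the smallest subset of documents that explains it exactly, subject to hard constraints on currency, sign, and date. This is a toy sketch (the Brain's solver works over all lines and documents jointly; the names and parameters here are assumptions), but it shows the difference from similarity matching: the answer is a subset that sums, not a lookalike.

```python
from dataclasses import dataclass
from datetime import date
from itertools import combinations
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class BankLine:
    amount: float   # negative = money out
    currency: str
    when: date

@dataclass(frozen=True)
class Doc:
    amount: float
    currency: str
    when: date
    ref: str

def explain_line(line: BankLine, docs: List[Doc],
                 max_docs: int = 3, window_days: int = 14,
                 tolerance: float = 0.01) -> Optional[Tuple[Doc, ...]]:
    """Find a subset of documents that explains one bank line.
    Hard constraints: same currency, same sign, dates within a window.
    Objective: the smallest subset whose total matches the line amount.
    Returns None when the line is genuinely unexplained, so it can be
    surfaced to the bookkeeper rather than buried in a review queue."""
    candidates = [d for d in docs
                  if d.currency == line.currency
                  and (d.amount > 0) == (line.amount > 0)
                  and abs((d.when - line.when).days) <= window_days]
    for size in range(1, max_docs + 1):  # prefer the simplest explanation
        for combo in combinations(candidates, size):
            if abs(sum(d.amount for d in combo) - line.amount) <= tolerance:
                return combo
    return None
```

Run against the first hard case above, one consolidated payment covering three invoices from two suppliers resolves cleanly, while anything left over falls into the "cannot explain" list by construction.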

What the Brain does not do

This is the part most vendors skip. Honest list:

It does not file your VAT returns. It prepares the figures, shows the workings, and exports a Revenue-ready file. A human pushes the button. I'm not building a system that auto-files statutory returns, because the liability for getting it wrong sits with the practice and the client, not with the software.

It does not produce statutory accounts. Year-end work — adjustments, accruals, prepayments, depreciation policy decisions, related-party disclosures — is not bookkeeping. The Brain feeds a clean trial balance into whatever final-accounts tool the practice uses. It doesn't pretend to do the accountant's job.

It does not give tax advice. If a client asks "should I register for VAT?" through a portal, the Brain will not answer. It will route the query to the responsible person and prepare a summary of the client's turnover position so the human can answer faster.

It does not replace the bookkeeper. I've watched this assumption break practices. The Brain removes the bottom layer of repetitive work — the data entry, the matching, the chasing. What's left is more judgement-heavy, not less. Practices that have planned for that have done well. Practices that bought AI hoping to cut headcount have generally been disappointed, because the work that remains is the work that needed a human all along.

Deployment: on-premise, on your terms

Every Brain deployment for an accounting practice runs on hardware the practice controls — either in their office or in a colocation facility they've chosen. Client financial data does not leave the practice's network. Models run locally. Document storage is local. The audit log is local and immutable.

This isn't a marketing position, it's a regulatory one. Irish practices are subject to GDPR, anti-money-laundering obligations, and professional body rules around client confidentiality. Sending client books to a US-hosted SaaS for processing is, at best, a paperwork exercise in data processing agreements and, at worst, a breach waiting to happen. On-premise removes the question. The general architecture is described on the Intelligence Brain overview.

The practical consequence is that updates ship as signed packages the practice installs, not as silent server-side changes. You always know which version of the model is running on your data, and you can roll back.

Where to start this week

If you run an Irish practice and you're curious whether this fits, don't start with a procurement process. Start with a measurement exercise. For one week, ask your bookkeepers to log the time they spend on three things: document capture and entry, bank reconciliation, and chasing clients for missing paperwork. Get real numbers, per client, per task. That gives you a baseline. Then we can have a useful conversation about which parts of the pipeline are worth automating first, and which parts to leave alone. The practices that get value from this don't buy a platform — they replace one bottleneck at a time, measure it, and move to the next.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →