
The intelligence brain for Irish GP practices


Most Irish GP practices I talk to are running Socrates or HealthOne, drowning in repeat prescription requests, chasing GMS claims, and trying to get a Friday afternoon back. The promise of "AI in healthcare" usually arrives as a SaaS pitch that wants your patient data shipped to a US cloud and a contract that nobody on the practice side has the legal budget to negotiate. That's not the answer for a four-doctor surgery in Clonmel or a single-hander in Mayo. The answer is a small, on-premise intelligence layer that sits beside your existing PMS, reads what it's allowed to read, and does the boring work that's eating your day.

This is what I've been building. Below is the engineering view — what an Intelligence Brain actually is in a GP context, where it plugs in, what it does on day one, and the regulatory shape it has to fit.

What "intelligence brain" means in a GP setting

The Intelligence Brain is a self-hosted appliance — a small server box, or a ring-fenced VM on your existing practice server — that runs a local language model, a vector index of your practice's own documents, and a set of agents that talk to your PMS through whatever interface it exposes. It doesn't replace Socrates or HealthOne. It sits next to them.

The important architectural decisions, in plain terms:

  • Local inference. The model runs on the box. Patient identifiable information never leaves the practice network unless you explicitly route a non-PII task to a hosted model.
  • Read-mostly by default. The Brain reads from the PMS, the scanner inbox, Healthlink messages, and the practice's own protocols folder. Write-back is gated — every action that changes a record is queued for a human click.
  • Append-only audit log. Every prompt, every retrieval, every action is logged with a hash chain. If the HSE or the DPC ever asks what the system did at 14:32 last Tuesday, you can answer.
  • No training on patient data. Retrieval-augmented generation, not fine-tuning. The model weights are frozen. Your data informs answers at query time and is not absorbed into the model.
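The hash-chained audit log in the list above is the piece most worth seeing concretely: each entry carries the hash of the one before it, so any edit to an earlier entry breaks every hash after it. A minimal sketch — the field names (`ts`, `actor`, `action`) are illustrative, not the appliance's actual schema:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, ts, actor, action, detail):
        record = {"ts": ts, "actor": actor, "action": action,
                  "detail": detail, "prev": self._prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        self._prev = record["hash"]
        return record["hash"]

    def verify(self):
        """Recompute the chain; any tampered entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("2025-01-14T14:32:00", "scanner-agent", "classify", "OPD letter")
log.append("2025-01-14T14:33:10", "gp", "approve", "file to chart")
print(log.verify())   # True
log.entries[0]["detail"] = "tampered"
print(log.verify())   # False
```

That is the property that lets you answer "what did the system do at 14:32 last Tuesday" with evidence rather than a shrug.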

That last point matters more than people realise. Most "AI for healthcare" contracts contain a clause that lets the vendor use de-identified data for model improvement. De-identification of free-text GP notes is genuinely hard — surnames appear in consultation notes, addresses get typed into the wrong field, family histories name relatives. The safe answer is to not send the data anywhere in the first place.

Where it plugs into a typical Irish practice

A four-GP practice in Ireland usually has a stack that looks like this: a PMS (Socrates or HealthOne, occasionally CompleteGP), Healthmail for secure email, Healthlink for hospital correspondence, a scanner that drops PDFs into a shared folder, a card-payment terminal, and a website with an online booking form. There may be a separate phone system. There is almost always a Windows server in a cupboard.

The Brain integrates at four points:

  • The scanner inbox. Hospital letters, lab results, and consultant correspondence land as PDFs. The Brain OCRs them, classifies them (discharge summary, OPD letter, radiology report, lab), extracts the patient identifier, and proposes a filing action against the matching PMS record. The GP clicks accept.
  • Healthlink and Healthmail. Same idea — structured and semi-structured messages get parsed, summarised, and queued for review. A two-page discharge summary becomes a three-line summary plus the original PDF, attached to the right chart.
  • The repeat prescription queue. The Brain reads the request, pulls the patient's current medication list and last review date from the PMS, checks against the practice's own repeat policy, and either flags it as routine-approve or routes it to the GP with a one-line reason ("last review 14 months ago, BP not recorded since").
  • The phone and reception. Optionally — a triage assistant that helps reception staff classify a call against the practice's own protocols. Not clinical decision-making. Just "this matches your urgent-same-day list, route accordingly."
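The repeat-prescription check above is the easiest of these to pin down in code. A minimal sketch of the rule pass — the 12-month review threshold and the record fields are assumptions standing in for the practice's actual repeat policy:

```python
from datetime import date

REVIEW_INTERVAL_MONTHS = 12  # assumed policy threshold

def months_between(earlier, later):
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def triage_repeat(request, record, today):
    """Return ('approve-routine', None) or ('review', reason)."""
    reasons = []
    if request["drug"] not in record["current_meds"]:
        reasons.append(f"{request['drug']} not on current medication list")
    since_review = months_between(record["last_review"], today)
    if since_review > REVIEW_INTERVAL_MONTHS:
        reasons.append(f"last review {since_review} months ago")
    if record.get("bp_recorded_since_review") is False:
        reasons.append("BP not recorded since last review")
    if reasons:
        return ("review", "; ".join(reasons))
    return ("approve-routine", None)

req = {"drug": "ramipril 5mg"}
rec = {"current_meds": ["ramipril 5mg"],
       "last_review": date(2023, 11, 1),
       "bp_recorded_since_review": False}
print(triage_repeat(req, rec, date(2025, 1, 14)))
# → ('review', 'last review 14 months ago; BP not recorded since last review')
```

The model's job in the real agent is extracting the request from free text; the decision itself stays as boring, auditable rules like these.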

Each of these is a separate agent with its own scope, its own audit trail, and its own kill switch. You can turn off the prescription agent without affecting the scanner agent. You should be able to turn any of them off in one click.
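The per-agent kill switch can be as simple as an enable flag checked at the top of every agent run. A minimal sketch, with illustrative agent names:

```python
class AgentRegistry:
    """Each agent checks its own switch before doing any work,
    so disabling one agent never touches the others."""

    def __init__(self, agents):
        self.enabled = {name: True for name in agents}

    def disable(self, name):
        self.enabled[name] = False

    def run(self, name, task):
        if not self.enabled.get(name, False):
            return f"{name}: disabled, skipping"
        return f"{name}: processed {task}"

registry = AgentRegistry(["scanner", "prescriptions", "claims"])
registry.disable("prescriptions")
print(registry.run("scanner", "opd_letter.pdf"))   # scanner: processed opd_letter.pdf
print(registry.run("prescriptions", "request-41")) # prescriptions: disabled, skipping
```

The point is structural, not clever: the check lives in one place, and "one click" maps to one boolean.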

The GMS claim and admin layer

The part nobody wants to talk about is the GMS claim. STC items, special items, Chronic Disease Management programme submissions, vaccination claims — all of it has to be coded correctly and submitted on time. Practice managers spend hours on this every month, and missed claims are real money walking out the door.

This is exactly the kind of work a local agent does well. The Brain reads the day's consultations from the PMS (where permitted by the interface), identifies billable items based on the practice's own claim rules, and produces a draft claim batch for the practice manager to review. It doesn't submit anything. It doesn't decide what's billable. It surfaces what looks billable, with a one-line justification and a link back to the consultation note, and the human decides.
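The surface-don't-submit pattern looks roughly like this. The claim codes, keyword matching, and rules table below are made up for the sketch — in the appliance the extraction is the model's job and the codes come from the practice's own claim rules — but the output shape is the point: every draft line carries a one-line justification and a reference back to the note:

```python
# Illustrative rules table; these are not real GMS claim codes.
CLAIM_RULES = [
    {"code": "STC-001", "keyword": "suturing", "desc": "suture of wound"},
    {"code": "VACC-FLU", "keyword": "flu vaccine", "desc": "influenza vaccination"},
]

def draft_claims(consultations):
    """Surface what looks billable; a human reviews and submits."""
    drafts = []
    for c in consultations:
        note = c["note"].lower()
        for rule in CLAIM_RULES:
            if rule["keyword"] in note:
                drafts.append({
                    "patient_id": c["patient_id"],
                    "code": rule["code"],
                    "why": f"note mentions '{rule['keyword']}' ({rule['desc']})",
                    "note_ref": c["note_id"],  # link back to the consultation
                })
    return drafts  # nothing is submitted from here

day = [
    {"patient_id": "P123", "note_id": "N-881",
     "note": "Laceration left hand, suturing performed, 3 sutures."},
    {"patient_id": "P456", "note_id": "N-882",
     "note": "Routine BP check, stable."},
]
for d in draft_claims(day):
    print(d["code"], d["patient_id"], "-", d["why"])
```

Only the first consultation produces a draft line; the routine BP check matches nothing and is left alone.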

The same pattern applies to the Chronic Disease Management programme. The Brain knows which patients are enrolled, when their last review was, what's outstanding (HbA1c, BP, ACR, foot check, retinal screening), and produces a weekly recall list. It doesn't send the recalls — your practice manager does, through your existing system. It does the searching and sorting that currently eats Tuesday morning.
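The weekly recall pass is a query, not intelligence. A sketch of what it computes — the check names come from the list above, but the six-month cycle and the record fields are assumptions for illustration:

```python
from datetime import date

REQUIRED_CHECKS = ["HbA1c", "BP", "ACR", "foot check", "retinal screening"]
REVIEW_DUE_DAYS = 180  # assumed six-month review cycle

def recall_list(patients, today):
    """Enrolled patients with an overdue review or outstanding checks,
    most overdue first."""
    due = []
    for p in patients:
        days_since = (today - p["last_review"]).days
        missing = [c for c in REQUIRED_CHECKS if c not in p["completed"]]
        if days_since > REVIEW_DUE_DAYS or missing:
            due.append({"patient_id": p["id"],
                        "days_since_review": days_since,
                        "outstanding": missing})
    return sorted(due, key=lambda r: r["days_since_review"], reverse=True)

patients = [
    {"id": "P123", "last_review": date(2024, 5, 2),
     "completed": ["HbA1c", "BP"]},
    {"id": "P456", "last_review": date(2024, 12, 10),
     "completed": REQUIRED_CHECKS},
]
for r in recall_list(patients, date(2025, 1, 14)):
    print(r["patient_id"], r["days_since_review"], r["outstanding"])
```

P123 appears on the list with the outstanding checks named; P456 is up to date and stays off it. The output is the Tuesday-morning list, ready for the practice manager.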

What the regulatory shape actually requires

If you're a data controller under GDPR — which every GP practice is — and you're processing special category data (health data is Article 9), the bar for any AI system is higher than for ordinary business software. The questions you have to be able to answer include:

  • Where is the processing happening, physically?
  • Who has access to the data, including vendor support staff?
  • What is the legal basis for each processing activity?
  • How long is data retained, and where?
  • What's the DPIA, and who signed it off?
  • If there's an AI-assisted decision, is there meaningful human review?

An on-premise architecture answers most of these by construction. The processing happens on your box. Vendor staff don't have access unless you grant a remote session. Retention is whatever you configure on the box. Human review is the default because every action is queued for a click. The DPIA is shorter to write because the data flow is shorter.

The EU AI Act layers on top. Most GP-practice uses I've described — admin, claim drafting, document filing — are not high-risk under the Act. Anything that looks like clinical decision support is. The Brain is deliberately scoped to stay on the admin side of that line. If a future module crosses into clinical territory, it gets a different conformity path, different documentation, and a different switch.

You can read more about the medical-vertical configuration on the Intelligence Brain for medical practices page, including the specific agents and the DPIA template I provide with the appliance.

What it looks like on the GP's screen

None of this matters if the GP has to learn a new system. The interface I've settled on is a single side panel that lives next to the PMS — not inside it, because the PMS vendors won't let you in there, but pinned to the side of the screen. The panel shows three things:

  • Inbox. Items waiting for review — incoming letters classified and ready to file, prescription requests with a recommended action, claims drafted for sign-off.
  • Ask. A free-text box where the GP or practice manager can ask a question about the practice's own data. "Show me all CDM patients with no review in the last nine months." "What's the recall list for the flu campaign?" "Find the consultant letter from St James's about Mrs Murphy in the last two years." Answers come with citations to the source documents.
  • Audit. A read-only view of what the Brain has done today, this week, this month. Click any line to see the prompt, the retrieval, the action, and who approved it.
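The shape of the Ask flow — answer plus citations back to source documents — can be sketched in a few lines. In the appliance this is a vector index plus a local model; crude keyword scoring stands in for both here, and the document IDs are invented:

```python
DOCS = [
    {"id": "letter-2024-031", "source": "Healthlink",
     "text": "Discharge summary: Mrs Murphy reviewed at St James's cardiology."},
    {"id": "protocol-cdm", "source": "practice protocols",
     "text": "CDM reviews are due every six months."},
]

def ask(question):
    """Return (best-matching text, citations) over the local corpus."""
    terms = set(question.lower().split())
    scored = []
    for d in DOCS:
        score = sum(1 for t in terms if t in d["text"].lower())
        if score:
            scored.append((score, d))
    scored.sort(key=lambda s: s[0], reverse=True)
    if not scored:
        return "No matching documents.", []
    top = [d for _, d in scored[:3]]
    citations = [f"[{d['id']}] ({d['source']})" for d in top]
    return top[0]["text"], citations

answer, cites = ask("consultant letter about Mrs Murphy from St James's")
print(answer)
print(cites)
```

Whatever the retrieval machinery, the contract is the same: no answer without a citation the GP can click through to the source document.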

That's the whole interface. There's no chat persona, no avatar, no animations. It's a tool. It does work. The work is reviewed and approved by a human. The next morning, the queue is shorter.

Where to start this week

If you're a GP partner or practice manager and any of this lands, the cheapest first step is not to buy anything. Spend an afternoon writing down the five admin tasks that consume the most time in your practice — the actual minutes, not the perceived ones. Map each one to where the data lives (PMS, scanner folder, Healthmail, paper) and who currently does it. That document is worth more than any vendor demo, because it tells you which agent to deploy first and what success looks like in week one. Then have a conversation — with me, with your IT supplier, with anyone who'll talk on-premise rather than cloud-by-default — about what a small, audited, local intelligence brain would actually do against that list. The answer is usually shorter and more boring than the marketing suggests, which is exactly the point.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →