Most GP practices and clinics I talk to in Ireland want to use AI. They want letter drafting, triage support, summarisation of long histories, faster coding for billing. What stops them isn't curiosity — it's that nobody has given them a clear answer on whether feeding patient data into ChatGPT, Copilot, or Gemini is lawful under the Data Protection Act 2018 and GDPR. The honest answer is that for most consumer AI services, it isn't. But there's a defensible path, and it's narrower and more technical than the vendor decks suggest.
What the Data Protection Act 2018 actually requires of medical AI
The Irish Data Protection Act 2018 sits on top of GDPR. For health data, the relevant provisions are Article 9 GDPR (special category data) and sections 36 and 52 of the 2018 Act, which set out suitable and specific measures for processing health information. The Act doesn't ban AI. It demands four things in practice: a lawful basis under Article 6, a special-category condition under Article 9, suitable and specific measures under section 36, and — for anything that materially changes the risk profile — a Data Protection Impact Assessment under Article 35.
For a GP using a cloud LLM to draft referral letters, the lawful basis is usually the provision of healthcare (Article 9(2)(h)) combined with a contract or legitimate-interest basis for the administrative tail. That sounds straightforward until you ask: where is the data being processed, who is the controller, who is the processor, is there an Article 28 contract, is there an international transfer, and what is the retention behaviour of the model provider? Those six questions kill most off-the-shelf deployments.
The Act's "suitable and specific measures" are not abstract. The DPC has been explicit that they include access logging, role-based access control, encryption at rest and in transit, pseudonymisation where feasible, and clear policies on staff use. An AI system that can't produce an audit trail of which clinician asked it what about which patient is not compliant with section 36, full stop.
Why consumer AI fails the test, and what changes when you go on-premise
The default ChatGPT, Claude, and Gemini consumer tiers are not lawful destinations for identifiable patient data. The reasons are concrete: data leaves the EEA, the provider is not bound as an Article 28 processor for clinical purposes, training-data reuse is either ambiguous or opt-out rather than contractually prohibited, and the practice cannot produce a meaningful record of processing activities for what flows through them. Even the "enterprise" tiers improve some of this but rarely all of it, and the international transfer question for a US-headquartered provider remains live after Schrems II despite the EU-US Data Privacy Framework.
The picture changes substantially when the model runs on hardware you control. If a Llama, Mistral, or Qwen model is running on a server in your premises or in an EU-resident private cloud, several risks collapse: there's no international transfer, no third-party training reuse, no shared inference infrastructure, and the audit trail is yours to define. You still need the DPIA, you still need the access controls, you still need the staff policy — but you've removed the structural problems that no contract can paper over.
This is the architecture I built the Intelligence Brain for medical practices around. The model and the patient data sit on the same physical machine in the practice. Nothing leaves the premises unless a clinician explicitly exports it.
The DPIA you actually have to write
A DPIA for medical AI is not a tickbox exercise, and the DPC will not be impressed by a generic template. The sections that matter for GP AI deployments are these.
Necessity and proportionality. You have to articulate why an AI system, rather than existing tooling, is necessary for the clinical or administrative purpose. "It saves time" is not sufficient. "It reduces letter-drafting time so clinicians can see more patients, with human review of every output before it leaves the system" is closer.
Data flows. Map every input and output. For each, state: source, destination, whether it's identifiable, pseudonymised or anonymised, retention period, who can access it, and whether logs of access exist. If you can't draw this on one page, you don't yet understand your system well enough to deploy it.
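One way to keep that one-page discipline honest is to hold the map as structured data rather than prose. The sketch below mirrors the fields listed above; the single entry is an assumption about a typical letter-drafting flow, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str             # e.g. "referral letter drafting"
    source: str           # where the input comes from
    destination: str      # where the output ends up
    identifiability: str  # "identifiable", "pseudonymised" or "anonymised"
    retention: str        # how long it is kept, and where
    access: list[str]     # roles that can see it
    access_logged: bool   # does an access log exist?

flows = [
    DataFlow(
        name="referral letter drafting",
        source="practice management system (patient summary)",
        destination="on-premise inference server, draft returned to clinician",
        identifiability="identifiable",
        retention="prompt/response audit log, 30 days",
        access=["GP", "practice nurse"],
        access_logged=True,
    ),
]
```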
Risks to data subjects. Be specific. Risks include hallucinated clinical content reaching a patient record without review, leakage of one patient's data into another patient's session via context contamination, model outputs being used as decision support without appropriate human oversight (an Article 22 concern), and unauthorised staff access to AI logs containing sensitive history.
Mitigations. For each risk, a concrete control. Hallucination is mitigated by mandatory human sign-off and clear UI cues that the output is draft. Cross-patient contamination is mitigated by per-session context isolation and not using long-running conversation memory across patients. Article 22 is mitigated by ensuring no automated decision is taken without a clinician in the loop.
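Per-session context isolation is easier to enforce in code than in policy. A minimal sketch, assuming a chat-style API: the context is rebuilt from scratch for every request, and no conversation history object outlives a single patient task.

```python
def build_messages(patient_extract: str, task: str) -> list[dict]:
    """Build the model context fresh for each request: only this patient's extract
    and this task. No conversation history is carried between patients or sessions."""
    return [
        {"role": "system",
         "content": "You draft clinical correspondence for review by a clinician."},
        {"role": "user", "content": f"{task}\n\n{patient_extract}"},
    ]
```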
Practical architecture for an on-premise medical AI system
The deployment pattern that holds up under audit looks like this. A single server — typically with one or two GPUs sized to the chosen model — sits inside the practice network, behind the practice firewall. It runs an inference engine (vLLM, llama.cpp, or Ollama in smaller cases) hosting an open-weights model. Clinical applications talk to it over the local network using a token-authenticated API. Every request and response is logged with the requesting user, timestamp, and a hash of the content.
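For illustration, here is roughly what a clinical application's call to that local endpoint might look like, assuming the inference engine exposes an OpenAI-compatible API (vLLM and Ollama both can). The hostname, token, and model name are placeholders, not recommendations.

```python
from openai import OpenAI

# The hostname, token and model name below are placeholders for whatever
# the practice's own server uses; nothing in this call leaves the LAN.
client = OpenAI(
    base_url="http://ai-server.practice.local:8000/v1",
    api_key="PRACTICE_LOCAL_TOKEN",
)

summary = "<patient summary pulled from the practice system>"
response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user",
               "content": "Draft a referral letter to dermatology from this summary.\n\n" + summary}],
)
draft = response.choices[0].message.content
# In production, every call is wrapped with the audit logging sketched earlier.
```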
Patient data is never sent to the public internet for inference. The model weights themselves are downloaded once, verified by checksum, and then the machine can be air-gapped or restricted to outbound updates only. Backups of logs and configuration go to encrypted storage; backups never include identifiable patient content extracted from the AI logs unless there is a specific clinical-record retention requirement, in which case those logs become part of the medical record and inherit its retention rules.
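Checksum verification of the weights is a one-off step worth scripting so it actually happens. A minimal sketch, with the expected digest and file path as placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash the downloaded weights file in chunks so it never needs to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "<published checksum, recorded in the deployment log>"
assert sha256_of("/models/llama-3.1-8b-instruct.gguf") == expected, "weights failed verification"
```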
The practical issues are less glamorous than the architecture diagram. You need someone who can patch Linux. You need a UPS and proper cooling. You need a documented procedure for what happens when the GPU fails on a Friday afternoon. These are the same operational problems any practice already has with the server that hosts its practice management system, which is why most practices can absorb this if it's set up properly the first time.
Where Article 22, automated decision-making, and clinical judgment intersect
Article 22 GDPR restricts solely automated decisions with legal or similarly significant effects. Medical decisions clearly qualify. The widespread misreading is that this bans AI in clinical settings. It doesn't. It requires that the decision not be solely automated — which means a clinician must exercise meaningful judgment, not rubber-stamp.
"Meaningful" is the operative word. If a system produces a draft prescription and the workflow is structured so the clinician clicks "approve" without realistic capacity to review, that is solely automated processing dressed in human clothing. The system design matters as much as the policy. Outputs should require active clinician input — editing, signing, justifying — not passive acceptance. This is true regardless of how good the model is.
Subject access, deletion, and the awkward question of model memory
Patients have the right to access their data and, in some circumstances, to deletion. For traditional records this is a database query. For AI systems it gets harder. If a patient's data appeared in prompts and responses logged on your AI server, those logs are within scope of an SAR. You need to be able to find them.
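If the audit log is structured and keyed by a patient identifier, as in the earlier sketch, answering the AI part of an SAR becomes a filter rather than a forensic exercise. The function below assumes that same JSON-lines format.

```python
import json

def ai_records_for_patient(log_path: str, patient_id: str) -> list[dict]:
    """Collect every AI audit entry referencing this patient, for an SAR response."""
    matches = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("patient_id") == patient_id:
                matches.append(entry)
    return matches
```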
If — and this is important — you fine-tuned a model on patient data, those weights potentially contain that data, and Article 17 deletion is technically very difficult. The defensible answer is: don't fine-tune on identifiable patient data. Use retrieval-augmented generation against your existing record store, where deletion of the source record removes the data from the AI's reach immediately. Keep the model itself general. This is the single most important architectural decision for compliance, and it's the one most often gotten wrong.
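A minimal sketch of what that looks like in practice, assuming a record store with a search method (a hypothetical API) and the same OpenAI-compatible client as before: retrieval happens against the live record store at query time, so deleting the source record removes it from the next query's reach.

```python
def answer_with_rag(record_store, model_client, patient_id: str, question: str) -> str:
    """Retrieve from the live record store at query time; the model stays general.
    Once the source record is deleted, the next query simply retrieves nothing."""
    passages = record_store.search(patient_id=patient_id, query=question, limit=5)  # hypothetical store API
    context = "\n\n".join(p.text for p in passages)
    response = model_client.chat.completions.create(
        model="llama-3.1-8b-instruct",
        messages=[
            {"role": "system",
             "content": "Answer only from the supplied extracts. Say so if they are insufficient."},
            {"role": "user", "content": f"{question}\n\nExtracts:\n{context}"},
        ],
    )
    return response.choices[0].message.content
```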
The same principle informs how I think about organisational intelligence systems generally: separate the model from the data, and keep the data under the same controls it was always under.
Where to start this week
If you run a practice and you're being pitched AI tools, do three things this week. First, write down every place patient data currently leaves your premises, including any AI tool already in informal use by staff — you can't secure what you can't see. Second, ask any AI vendor for their Article 28 processor agreement, their data residency commitment in writing, and their position on training data reuse; if they hesitate on any of the three, that's your answer. Third, before any deployment, draft the DPIA yourself or with someone who has actually written one for the DPC — not a template, the real thing. The compliance work is front-loaded. Once it's done properly, the system runs quietly for years. Skip it, and you'll be doing it under pressure during an audit instead.