Intelligence Brain · public-sector

Public-sector AI procurement in Ireland — what the brain delivers


Procurement officers in Irish public bodies are being asked, with increasing frequency, to evaluate AI tools they were never trained to evaluate. The frameworks haven't caught up. The OGP templates assume software you install, host, and audit — not a model that talks to a US data centre and learns from your queries. Until the templates catch up, the burden lands on individual buyers in the HSE, Revenue, the Department of Social Protection, local authorities, and the semi-states. This article sets out, in concrete engineering terms, what a brain-style architecture delivers to that buyer, and what questions actually matter when you put an AI capability out to tender in Ireland.

Why standard SaaS AI clauses don't fit public bodies

The default contract you're handed by a hyperscaler or a SaaS AI vendor is built for a US-headquartered commercial buyer. It tells you the model runs in a region of the vendor's choosing, that prompts and outputs may be retained for "service improvement", and that sub-processors can be added with notice. For a private company in retail or media, that's usually fine. For a public body governed by Section 38 of the Data Protection Act, the NIS2 transposition, the EU AI Act's high-risk Annex III categories, and the Public Service ICT Strategy, it's a series of latent compliance failures.

The specific failures I keep seeing in tender responses:

  • Data residency described as "EU" but with fallback to US regions during incidents — meaning Schrems II analysis has to be repeated every time a region fails over.
  • "We don't train on your data" stated in marketing but contradicted, or silently caveated, in the DPA appendix.
  • No technical mechanism to prove a query never left the tenant boundary — only a contractual assertion.
  • Audit logs that capture API calls but not the model's intermediate reasoning, leaving Section 38 subject access requests impossible to fulfil completely.
  • Sub-processor lists that change without meaningful notice, and that include model providers who themselves use third-country compute.

None of this is the vendor being malicious. It's that the SaaS AI model and the Irish public-sector compliance model were designed for different worlds. Sovereign AI procurement is the bridge between them, and it has to be specified at the architecture level, not the contract level, because contractual promises don't survive a region failover at 3am.

What "on-premise" actually has to mean in 2025

The phrase "on-premise AI" is being abused. I've seen it applied to systems that run inference on a vendor's GPU cluster but expose a local API gateway. That's not on-premise. For Irish public-sector AI procurement, the test I'd apply is: if the internet cable to the building is cut, does the system still answer queries? If the answer is no, it's not on-premise — it's a hosted service with a polite façade.

A genuine on-premise organisational intelligence layer needs:

  • Inference on local hardware. The model weights sit on a machine in your data centre, your colocation facility, or your departmental server room. GPU or CPU inference, depending on model size and latency targets.
  • Local embeddings and vector store. The index of your documents — the thing that actually contains sensitive content in mathematical form — never leaves the building. This is the part most "private AI" deployments quietly externalise.
  • Local audit and telemetry. Logs go to your SIEM, not the vendor's. If the vendor needs telemetry for support, it's anonymised and pull-based, not push-based.
  • Air-gappable updates. Model and software updates ship as signed bundles you can apply offline, with a rollback path. Not auto-update from a vendor CDN.
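The update requirement is the one most often hand-waved, so it's worth stating what "apply offline, with a rollback path" looks like at its simplest. A minimal sketch in Python, with hypothetical file names; the digest-and-version check here stands in for a full signature scheme, and a real deployment would verify a detached Ed25519 (or similar) signature on the manifest rather than trust a bare hash:

```python
import hashlib
import json
from pathlib import Path

def verify_bundle(bundle: Path, manifest: Path, current_version: int) -> bool:
    """Check an offline update bundle against a manifest obtained
    out-of-band (e.g. on signed physical media). Refuses downgrades."""
    meta = json.loads(manifest.read_text())
    # Hash the bundle in chunks so multi-gigabyte model weights
    # don't have to fit in RAM.
    h = hashlib.sha256()
    with bundle.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    if h.hexdigest() != meta["sha256"]:
        return False  # tampered or corrupted bundle
    if meta["version"] <= current_version:
        return False  # replayed or downgraded bundle is rejected
    return True
```

The downgrade check matters as much as the hash: an attacker who can't forge a new bundle can still try to re-apply an old, vulnerable one.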

If a tender response can't tick those four boxes with technical detail, it isn't on-premise for the purposes you care about. It's a hosted product with on-premise marketing.

The five questions that should be in every Irish public AI tender

Most procurement teams I've talked to are working from generic ICT tender templates with an AI annex bolted on. The annex usually asks about bias, explainability, and "ethical use" — important, but not the questions that distinguish a defensible architecture from an indefensible one. Here are the five I'd add, and what a credible answer looks like.

1. Where, physically, do queries get processed?

Not "EU region". A street address, or at minimum a named data centre operator and country. If the vendor can't tell you that, they can't tell you whether a query crosses a border under load.

2. What is the data egress profile under normal and failure conditions?

Bytes per day leaving the tenant boundary, broken down by destination and purpose. A genuine on-premise system has near-zero egress except for licence checks and signed update pulls. If the answer involves "model API calls", you're looking at a hosted product.
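That egress profile can be enforced, not just asserted. A minimal sketch, with hypothetical vendor hostnames, of the kind of allowlist check a buyer's network team can run against flow logs:

```python
# Hypothetical egress policy for an on-premise appliance: the only
# permitted outbound flows are licence checks and signed update pulls.
ALLOWED_EGRESS = {
    "licence.vendor.example": "licence check",
    "updates.vendor.example": "signed update pull",
}

def classify_flow(dest_host: str, bytes_out: int) -> str:
    """Return the declared purpose of an outbound flow, or flag it.
    Anything off the allowlist is a policy violation to investigate,
    including model API calls, which indicate a hosted product."""
    purpose = ALLOWED_EGRESS.get(dest_host)
    if purpose is None:
        return f"VIOLATION: {bytes_out} bytes to undeclared host {dest_host}"
    return purpose
```

The useful property is that the allowlist is short enough to put in the contract schedule verbatim, which turns a marketing claim into a testable one.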

3. How is the model itself updated, and who decides when?

For a regulated buyer, an unscheduled model swap is a behavioural change in a system you've already accepted. You need version pinning, change logs at the weight level, and the right to defer updates. "We continuously improve the model" is a red flag, not a feature.

4. What evidence can the system produce for a Section 38 access request?

If a citizen asks what the system has processed about them, can you actually answer? That requires per-query logging tied to identifiable inputs, retention controls aligned to your retention schedule, and the ability to extract and redact.
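A sketch of the per-query record this implies, with illustrative field names rather than any standard schema; the point is that answering an access request reduces to filtering the log by a data subject's tokenised identifier:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class QueryAuditRecord:
    """One record per query, written to the buyer's SIEM.
    Field names are illustrative, not a standard schema."""
    query_id: str
    user_id: str
    timestamp: str
    data_subject_refs: list  # tokenised identifiers the retrieval step touched
    source_doc_ids: list     # documents the answer drew on
    retention_class: str     # maps onto the body's retention schedule

def new_record(query_id, user_id, subject_refs, doc_ids, retention_class="standard"):
    return QueryAuditRecord(
        query_id=query_id,
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        data_subject_refs=subject_refs,
        source_doc_ids=doc_ids,
        retention_class=retention_class,
    )

def records_for_subject(log, subject_ref):
    """Everything the system has processed about one data subject."""
    return [r for r in log if subject_ref in r.data_subject_refs]
```

If the vendor's logging can't support a query like `records_for_subject`, the access-request obligation lands on you with no way to meet it.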

5. What happens at end of contract?

Specifically: the embeddings, the fine-tuned weights, the prompt history, the audit logs. A clean exit means destruction certificates and a verifiable wipe, not "we'll delete it within 90 days".

Architecture patterns that actually pass Irish public-sector review

The architecture I've ended up building for the public-sector intelligence brain reflects what survives an honest review by an Irish DPO, an internal audit team, and a security architect who's read the NIS2 transposition closely. It's not exotic. It's a pattern more vendors should adopt.

The core pattern: a single tenant-bounded appliance that contains the model, the embeddings, the vector index, the orchestration layer, and the audit log. It exposes a documented API to internal applications. It pulls signed updates from a hardened distribution endpoint on a schedule the buyer controls. It writes logs to the buyer's SIEM in a defined schema. It has no outbound dependency on a model provider's API for inference.

Around that core, three things matter for public-sector deployments specifically:

  • Role-based retrieval. The vector store enforces access at retrieval time, not at presentation time. A query from a junior officer can't surface documents they wouldn't be allowed to open in the underlying system. This is the failure mode in most "RAG over SharePoint" deployments — the LLM sees everything the indexer saw, regardless of the user's clearance.
  • Citation-bound answers. Every assertion the model makes is traceable to a specific source document and passage. No citation, no answer. This is more than an anti-hallucination measure: it's an evidentiary requirement when the output may inform a benefit decision, a planning determination, or an inspection.
  • Deterministic redaction at ingest. PPS numbers, named individuals in case files, and other identifiers are detected and tokenised before they ever reach the embedding model. The model works on tokens; only authorised retrieval paths can resolve them back.
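Role-based retrieval, in particular, is simple to state as a testable requirement. A sketch, assuming each indexed chunk carries the access-control list of its source document, copied at ingest time:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str
    allowed_roles: frozenset  # ACL copied from the source system at ingest

def retrieve(candidates: list, user_roles: set, k: int = 5) -> list:
    """Enforce access at retrieval time: chunks the user couldn't open
    in the source system never reach the model's context window.
    `candidates` is assumed to be ranked output from the vector index."""
    visible = [c for c in candidates if c.allowed_roles & user_roles]
    return visible[:k]

def answer_context(candidates: list, user_roles: set):
    """Citation-bound composition: no retrievable source, no answer."""
    chunks = retrieve(candidates, user_roles)
    if not chunks:
        return None
    return [(c.doc_id, c.text) for c in chunks]
```

The filter sits before the model, not after it, which is the whole point: a presentation-time filter can be prompted around, a retrieval-time filter can't.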

Where the EU AI Act actually lands for Irish public buyers

The EU AI Act's high-risk categories under Annex III map directly onto a lot of public-sector AI use cases: access to public services, eligibility for benefits, law enforcement support, migration and border, administration of justice. If you're procuring AI for any of those, you're procuring a high-risk system, with the obligations that follow — risk management, data governance, technical documentation, logging, human oversight, accuracy and robustness testing, and post-market monitoring.

The practical implication for procurement: the documentation burden is on the deployer, which in most cases is the public body, not the vendor. You can't outsource the obligation by buying a SaaS product. You can, however, buy an architecture that gives you the artefacts you need — the logs, the documentation, the test results, the human-oversight mechanisms — rather than one that makes you reconstruct them.

This is where the choice of architecture becomes a compliance choice. A hosted model with opaque internals leaves you writing risk management documentation against a black box. A locally-deployed brain with documented retrieval, logging, and oversight gives you something to point at when the regulator asks how the system reached its conclusion.

What to do this week

If you're sitting on an AI tender draft right now, the single most useful thing you can do this week is rewrite the technical section around the five questions above and circulate it to one DPO, one security architect, and one frontline officer. The DPO will tell you whether the answers protect the citizen. The architect will tell you whether they're verifiable. The officer will tell you whether the resulting system is usable. If you'd like a worked example of an architecture that answers those questions affirmatively, the Intelligence Brain overview sets out the pattern in more detail.

The Irish public sector doesn't need to import a US-shaped AI procurement model. It needs to specify, clearly and technically, what sovereign means — and then buy that.

Book a 30-minute assessment

Direct with Michael. No charge. No pitch deck.

Pick a slot →