Compliance is the architecture, not the paperwork
Most AI deployments treat compliance as a document trail produced after the fact — a DPIA written to justify a decision already taken, a legitimate interest assessment retrofitted to cover a vendor procurement that closed last quarter. That approach worked, more or less, when the technology was a search box or a recommendation engine. It does not work for systems that ingest organisational knowledge, infer relationships between people and decisions, and produce outputs that influence how regulated firms behave.
I built the Intelligence Brain as an on-premise system because the compliance posture has to start at the architecture, not at the policy library. If your model weights, prompts, and retrieved documents never leave your tenancy, a large class of GDPR and EU AI Act questions answer themselves. The remaining questions — and there are still plenty — get easier to document, defend, and revise.
DPIA for AI systems: what actually goes in it
A Data Protection Impact Assessment for an AI system is not a re-skinned vendor questionnaire. Under Article 35 GDPR and the Irish Data Protection Commission's guidance, a DPIA for a system that profiles, infers, or systematically monitors needs to describe the processing in operational terms — not marketing terms.
For an Intelligence Brain deployment, the DPIA I help customers produce typically covers:
- Data flow: what gets ingested, from which source systems, under which lawful basis, and where the embeddings and indices physically reside
- Inference surface: what the model can output, what it cannot output, and what guardrails are enforced at retrieval and generation time
- Human oversight: who reviews outputs, on what cadence, with what authority to override
- Retention and deletion: how a subject access request or erasure request actually propagates through vector stores and audit logs — not just the source database (the sketch below shows what that propagation can look like)
- Residual risk: what remains after mitigations, and who in the firm has signed off on accepting it
The last point is the one most DPIAs fudge. A DPIA that lists no residual risk is a DPIA nobody believes.
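To make the retention-and-deletion point concrete, here is a minimal sketch of what erasure propagation can look like. The `vector_store` and `audit_log` interfaces and the `subject_id` metadata convention are illustrative assumptions, not the Intelligence Brain API; the point is that embeddings, chunks, and logs all need a deletion path, and the erasure itself needs to leave evidence.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ErasureReceipt:
    subject_id: str
    chunks_deleted: int
    completed_at: str

def propagate_erasure(subject_id: str, vector_store, audit_log) -> ErasureReceipt:
    """Delete every derived artefact for a data subject, not just the source row.

    Assumes ingestion stamped each chunk with the subject's identifier in
    metadata: erasure is only tractable if that link was built in up front.
    """
    # 1. Remove embeddings and chunks derived from the subject's documents.
    deleted = vector_store.delete(filter={"subject_id": subject_id})

    # 2. Audit logs are a control in their own right, so redact rather than
    #    delete: tombstone the personal content, keep the event record.
    audit_log.redact(filter={"subject_id": subject_id}, reason="art_17_erasure")

    # 3. Record the erasure itself, so the DPIA's deletion claim is evidenced.
    receipt = ErasureReceipt(
        subject_id=subject_id,
        chunks_deleted=deleted,
        completed_at=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append("erasure_completed", receipt.__dict__)
    return receipt
```

The redact-rather-than-delete choice for the logs reflects a real tension: the audit trail is itself a control, so you remove the personal content without destroying the record that processing happened.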
Legitimate interest assessments for inference workloads
Where consent is not the right basis — and for most internal-knowledge use cases it is not — legitimate interest under Article 6(1)(f) is usually where firms land. The three-part test (purpose, necessity, balancing) is well understood in principle and badly applied in practice.
The balancing test is where AI workloads create new pressure. A staff member's expectation when they wrote a board paper in 2019 was probably not "this will be retrievable by an inference system in 2026". You cannot wave that away. What you can do is:
- Restrict the corpus by classification and age, with explicit carve-outs for HR, legal-privileged, and personal correspondence
- Document the retrieval scope so it matches the LIA, and enforce it in the system rather than the policy (see the sketch after this list)
- Provide a meaningful opt-out route for individuals whose content is in scope, and make sure the route actually works
- Re-run the LIA when the corpus, the model, or the user population materially changes
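One way to make the first three bullets enforceable rather than aspirational is to express the LIA's scope as data that the retrieval layer checks on every query. A minimal sketch, with illustrative field names and thresholds (none of these values is a product default):

```python
from datetime import date, timedelta

# Illustrative scope mirroring the LIA. These values are assumptions for the
# sketch; in practice they should be read from the signed-off LIA itself.
LIA_SCOPE = {
    "allowed_classifications": {"public", "internal"},
    "excluded_categories": {"hr", "legal_privileged", "personal_correspondence"},
    "max_age_days": 5 * 365,
    "opted_out_authors": set(),  # kept in sync with the opt-out register
}

def in_scope(doc_meta: dict, scope: dict = LIA_SCOPE) -> bool:
    """Enforce the LIA at retrieval time: a document outside the documented
    scope is never eligible, whatever the policy PDF says."""
    if doc_meta["classification"] not in scope["allowed_classifications"]:
        return False
    if doc_meta["category"] in scope["excluded_categories"]:
        return False
    if doc_meta["author_id"] in scope["opted_out_authors"]:
        return False
    cutoff = date.today() - timedelta(days=scope["max_age_days"])
    return doc_meta["created"] >= cutoff  # assumes `created` is a datetime.date

# Applied as a pre-filter, so out-of-scope content never reaches the model:
#   candidates = [d for d in index.search(query) if in_scope(d.metadata)]
```

A useful side effect: when the corpus or user population changes, re-running the LIA becomes a scope diff you can review rather than a fresh policy document.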
An LIA that sits in a SharePoint folder and never gets revisited is a liability, not a control.
EU AI Act tier-mapping: where your use case actually sits
The AI Act came into force in August 2024 with staged application. The prohibited-practice rules applied from February 2025, the general-purpose model obligations from August 2025, and the high-risk system obligations apply from August 2026. Irish firms that think they have time should look at their procurement cycle and reconsider.
The first job is honest tier-mapping. Most internal-knowledge and decision-support deployments are not high-risk under Annex III, but several edge cases pull them in (the sketch after this list shows one way to make the mapping explicit):
- Anything used in employment decisions — recruitment screening, performance evaluation, task allocation — is high-risk
- Anything used to evaluate the creditworthiness of natural persons, or for risk assessment and pricing in life and health insurance, is high-risk
- Anything used in access to essential services, including some financial services, is high-risk
- Anything that influences administration of justice or democratic processes is high-risk
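Tier-mapping stays honest when it is written down as data rather than argued case by case. A deliberately simplified sketch follows; the flags are shorthand for the Annex III points above, and no substitute for legal analysis of the actual use case:

```python
from enum import Enum, auto

class Tier(Enum):
    HIGH_RISK = auto()
    LIMITED_OR_MINIMAL = auto()

# Simplified triggers drawn from the Annex III categories listed above.
ANNEX_III_TRIGGERS = {
    "employment_decisions",   # recruitment, evaluation, task allocation
    "creditworthiness",
    "life_or_health_insurance_pricing",
    "access_to_essential_services",
    "administration_of_justice",
    "democratic_processes",
}

def tier_map(use_case_flags: set[str]) -> Tier:
    """Return the provisional AI Act tier for a deployment's use-case flags."""
    if use_case_flags & ANNEX_III_TRIGGERS:
        return Tier.HIGH_RISK
    return Tier.LIMITED_OR_MINIMAL

# An internal knowledge assistant with no Annex III use sits outside the high-risk tier:
assert tier_map({"internal_knowledge_search"}) is Tier.LIMITED_OR_MINIMAL
# The same system wired into recruitment screening changes tier immediately:
assert tier_map({"internal_knowledge_search", "employment_decisions"}) is Tier.HIGH_RISK
```

The value is less the function than the discipline: every new integration has to declare its flags, and the tier answer changes when the flags change, not when somebody remembers to ask.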
If you are in scope, the obligations are substantial: risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness, conformity assessment. None of these obligations is unreasonable; all of them require evidence, and the system has to be built to produce that evidence, not retrofitted to claim it.
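"Built to produce that evidence" has a concrete shape: every inference writes a record that a conformity assessment, or the DPC, can later read. A minimal sketch with a hypothetical schema; the field names are my assumptions, not wording from the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log_file, *, user: str, prompt: str, retrieved_ids: list[str],
                  model_version: str, output: str, overridden_by: str | None = None):
    """Append one evidence record per inference (hypothetical schema)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        # Hash rather than store the text, keeping raw personal data out of logs.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "retrieved_ids": retrieved_ids,  # which documents shaped the answer
        "model_version": model_version,  # needed to reproduce behaviour later
        "overridden_by": overridden_by,  # evidences human oversight in practice
    }
    log_file.write(json.dumps(record) + "\n")
```

Hashing the prompt and output keeps personal data out of the evidence trail while still proving what was asked and answered; the `overridden_by` field is what turns "human oversight" from a policy claim into a queryable fact.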
Irish AI compliance: the regulators you actually deal with
In Ireland, the practical regulatory surface for an AI deployment is broader than the AI Act alone. The Data Protection Commission remains the lead authority for GDPR matters. The Central Bank of Ireland's operational resilience and outsourcing expectations apply to regulated financial firms regardless of whether the workload is "AI". DORA applies from January 2025 for in-scope firms and pulls third-party ICT risk — including model providers — into formal register requirements.
The national competent authority structure for the AI Act in Ireland is still bedding in. What is not in doubt is that the DPC will continue to be the authority most firms hear from first, because most enforceable harms in AI deployments are data protection harms expressed through a model.
What to do next
If you are scoping an Intelligence Brain deployment and the compliance picture is what is holding you up, the right next step depends on where you sit. If you want the broader product picture — architecture, deployment model, what runs where — go to the Intelligence Brain overview. If you are in financial services specifically and want to see how this maps to Central Bank and DORA expectations, the financial services landing page covers that ground in detail.
Either way: do not start with the policy. Start with the architecture. The policy is easier to write when the system was built to be defensible.