Most school and university AI policies I've read are written as if the question is "should students use AI?" That ship sailed. The real question is whether you can prove, six months later, what a specific student did with AI on a specific piece of work — and whether your institution acted reasonably in response. If you can't answer that, you don't have a policy. You have a position statement.
I've spent the last while talking to principals, registrars, and academic-integrity officers across Ireland and the UK about this. The pattern is consistent: a policy document exists, it gets updated each summer, and nobody can actually demonstrate compliance because there's no record of anything. This article is about fixing that — building a student AI policy on an audit-trail foundation rather than a rules-and-hope foundation.
Why prohibition-based policies have already failed
The first wave of school AI policy was prohibition. Don't use it. If you do, that's misconduct. The problem with prohibition is enforcement: detection tools don't work reliably, false positives damage students, and prohibition pushes legitimate use cases — spell-checking, brainstorming, accessibility support — into the shadows alongside the cheating.
The second wave was permission with conditions. Use AI for these tasks but not those. Cite it when you do. Declare it. This is closer to right, but it relies entirely on student self-reporting in a context where there's a clear incentive not to self-report. A student AI policy that depends on the honour system in 2025 isn't a policy; it's a wish.
The third wave — the one that actually works — treats AI use the same way we treat library access, lab equipment, or networked computers in a school: a managed resource with logged usage. You don't ban students from using the library. You don't audit every book they take out. But you do know who borrowed what, and if there's a question later, you can answer it. That's the model.
What an audit-trail policy actually requires
Before you write a single line of policy text, you need to be honest about what infrastructure exists. An audit-trail approach requires four things, and if you don't have them, the policy can't be enforced no matter how well-drafted it is.
- A sanctioned AI surface. A specific tool or set of tools that students are expected to use for AI-assisted work. Could be a hosted service with institutional SSO, could be an on-premise system. The point is: when a student uses AI, it should normally be through this surface, not through whatever they have on their phone.
- Per-student authentication. Sessions tied to identity, not shared accounts or anonymous access. This is the bit most schools skip and then wonder why their logs are useless.
- Retention with defined scope. Prompts, responses, timestamps, and the assignment context retained for a specific period — long enough to cover the academic-integrity window, short enough to be defensible under data-protection rules.
- An access procedure. A documented process for who can pull a student's AI history, under what circumstances, and with what oversight. Without this, you've built a surveillance tool, not an academic-integrity tool.
If those four pieces are in place, the policy text becomes much shorter and much more defensible. If they're not, the policy is aspirational.
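To make those four pieces concrete, here's a minimal sketch of how they might be expressed as configuration for a sanctioned AI surface. The field names and values are illustrative assumptions, not a reference to any particular product or standard.

```python
# Illustrative only: field names and values are assumptions, not a real product's schema.
AUDIT_TRAIL_CONFIG = {
    # 1. Sanctioned AI surface: the tools students are expected to use.
    "sanctioned_tools": ["institution-hosted-chat", "vle-embedded-assistant"],
    # 2. Per-student authentication: no shared or anonymous sessions.
    "authentication": {"method": "institutional_sso", "shared_accounts": False},
    # 3. Retention with defined scope: long enough to cover the academic-integrity
    #    window, short enough to defend under data-protection rules.
    "retention": {"prompts_and_responses_days": 540, "after_expiry": "delete"},
    # 4. Access procedure: who can pull a student's history, and with what oversight.
    "access": {
        "roles_permitted": ["academic_integrity_officer"],
        "requires": ["documented_case_reference", "second_approver"],
        "every_read_is_logged": True,
    },
}
```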
The structure of the policy document itself
I'd argue a workable university AI policy or school AI policy has six sections, in this order. The order matters because it's the order a reasonable reader — student, parent, regulator, court — will want to follow.
1. What AI use is expected, permitted, and prohibited
Be specific. "AI is permitted for brainstorming and outline generation in coursework, but the final written submission must be the student's own composition" is enforceable. "Use AI responsibly" is not. List the categories of work and the categories of use. If you teach programming, say what's permitted in a coding assignment versus a written exam.
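One way to keep this section specific is to write the categories down as a simple matrix before turning them into prose; the sanctioned surface can then enforce, or at least display, the same rules. A hedged sketch, with categories that are examples rather than a recommended taxonomy:

```python
# Example only: the categories and rulings here are illustrative, not a recommended taxonomy.
PERMITTED_USE = {
    "coursework_essay": {
        "brainstorming": "permitted",
        "outline_generation": "permitted",
        "final_composition": "prohibited",   # the submission must be the student's own
        "disclosure_required": True,
    },
    "coding_assignment": {
        "explaining_error_messages": "permitted",
        "generating_submitted_code": "prohibited",
        "disclosure_required": True,
    },
    "invigilated_exam": {
        "any_ai_use": "prohibited",
    },
}
```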
2. Which tools are sanctioned
Name them. If a student uses something else for assessed work, that's a policy breach in itself, separate from any question about the content of the work. This is the bit that makes the audit trail work — because the institution controls the sanctioned surface, the institution has the logs.
3. Disclosure requirements
What must students declare on submitted work? A footnote, a separate declaration, a prompt log? Pick one and stick to it. The disclosure is partly about transparency and partly about creating a contemporaneous record that can be checked against the audit trail.
4. The audit trail itself
This is the section most policies don't have. State plainly: the institution retains records of student interactions with sanctioned AI tools. State the retention period. State who has access. State the circumstances under which access happens. This is also where data-protection notices belong — students need to know this in advance, not discover it during a misconduct hearing.
5. The investigation procedure
When there's a suspected breach, what happens? Who looks at the audit trail? What's the standard of proof? What's the appeals route? An audit trail without a documented procedure for using it is worse than no audit trail, because it invites inconsistent application.
6. Sanctions and proportionality
A first-year student who used AI on a low-stakes assignment without disclosing it is not the same as a final-year student who submitted a fabricated dissertation. The policy needs a graduated response, and the gradation needs to be visible in the document.
The Irish context — data protection and the Department
For an Irish education AI policy specifically, two constraints shape the design. The first is GDPR. Retaining prompts and responses tied to an identified student is processing personal data, and you need a lawful basis. Legitimate interest works for academic integrity, but only if you've done the balancing test and documented it. Consent is shaky because there's a clear power imbalance.
The second is the direction of travel from the Department of Education and the higher-education regulators. Nothing prescriptive has landed yet that mandates a specific approach, but the questions being asked of institutions are increasingly about evidence: how do you know, how can you show, what records do you keep. A policy that anticipates those questions will age better than one that doesn't.
The on-premise question matters here too. If the sanctioned AI surface sends every student prompt to a third-country provider, you've added a transfer-impact assessment to an already complex picture. This is one of the reasons the Intelligence Brain for education sits inside the institution's own infrastructure — the audit trail stays where the data subjects are, which simplifies the legal analysis considerably.
The technical shape of the audit trail
For the engineers reading this — and there should be at least one in any institution implementing this seriously — here's what the log structure should capture per interaction:
- Session identity: authenticated student ID, not just an IP or device.
- Timestamp: with timezone, machine-readable.
- Context tag: module code, assignment ID, or "general" — populated either by the student at session start or by integration with the VLE.
- Full prompt content: verbatim, including any uploaded files referenced.
- Full response content: verbatim.
- Model and version: so behaviour can be reproduced or explained later.
- Hash chain or write-once storage: so the integrity of the log itself is defensible if challenged.
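Put together, a single interaction record might look something like the following. This is a sketch under the assumption of an append-only store; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative record structure; field names are assumptions, not a standard schema.
@dataclass
class AIInteractionRecord:
    student_id: str   # authenticated identity, not just an IP or device
    timestamp: str    # ISO 8601 with timezone, machine-readable
    context_tag: str  # module code, assignment ID, or "general"
    prompt: str       # verbatim, including references to any uploaded files
    response: str     # verbatim
    model: str        # model name and version, so behaviour can be explained later
    prev_hash: str    # hash of the previous record (see the hash-chain sketch below)

    @staticmethod
    def now_iso() -> str:
        return datetime.now(timezone.utc).isoformat()
```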
The hash-chain piece is the bit institutions skip and regret. If a student's defence is "those logs have been tampered with," you need a cryptographic answer, not a "trust us." It's not difficult to implement — every record includes the hash of the previous record, and you publish or escrow the most recent hash periodically — but it has to be designed in, not bolted on.
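Here's a minimal sketch of that hash-chain idea, assuming records are serialised deterministically before hashing. The function names are illustrative; the serialisation, storage, and escrow details are where a real implementation needs care.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash this record together with the previous record's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_record(log: list[dict], record: dict) -> None:
    """Append a record whose hash commits to the entire history so far."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    log.append({**record, "prev_hash": prev_hash,
                "hash": record_hash(record, prev_hash)})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; tampering with any record breaks the chain from that point on."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k not in ("prev_hash", "hash")}
        if entry["prev_hash"] != prev_hash or entry["hash"] != record_hash(body, prev_hash):
            return False
        prev_hash = entry["hash"]
    return True
```

Escrowing or publishing the most recent hash periodically means the institution can't quietly rewrite history either, which matters for the student's side of a dispute as much as for the institution's.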
Access to the logs themselves should be a separate, logged event. The person who pulls a student's AI history for an investigation creates a record of having done so, and that record is itself auditable. This is the pattern we use in the broader Intelligence Brain architecture — every read is an event, not just every write.
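A hedged sketch of what "every read is an event" can look like in practice; the function and field names here are assumptions for illustration, not a reference implementation:

```python
from datetime import datetime, timezone

# Sketch only: names are illustrative. The point is that reading a student's
# history produces its own auditable record before any data is returned.
def fetch_student_history(store, access_log: list[dict], *,
                          student_id: str, requested_by: str, case_ref: str):
    access_log.append({
        "event": "audit_trail_read",
        "student_id": student_id,
        "requested_by": requested_by,   # the investigator's own authenticated identity
        "case_reference": case_ref,     # the documented reason for access
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return store.records_for(student_id)  # assumed store interface
```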
What this changes for teaching staff
The policy isn't only a discipline tool. The same audit trail that supports academic-integrity investigations is genuinely useful for teaching. A lecturer can see, at the cohort level, what students are struggling with — what they're asking AI to explain, where the conceptual gaps are. That's pedagogically valuable in a way detection tools never were.
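As a rough sketch of that cohort-level view (aggregation only, with a grouping heuristic that is an assumption; a real implementation would cluster prompts by topic rather than by opening words):

```python
from collections import Counter

def cohort_question_counts(records: list[dict], module_code: str) -> Counter:
    """Count recurring prompt openings for one module, as a rough signal of
    what a cohort keeps asking the AI to explain."""
    counts = Counter()
    for r in records:
        if r.get("context_tag") == module_code:
            counts[" ".join(r["prompt"].split()[:5]).lower()] += 1
    return counts
```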
It also changes the conversation with students. When AI use is logged but permitted within defined limits, the discussion shifts from "did you cheat" to "show me how you used this and what you learned." That's a much healthier place for academic integrity to live, and it's only available once the audit trail exists.
Where to start this term
Don't try to write the perfect policy first. Start with one programme or one year group. Pick a sanctioned AI surface — even a basic one — and stand up authentication and logging behind it. Run it for a term. See what the logs actually look like, what disclosure students actually make, what cases actually arise. Then write the policy from evidence, not from imagination. The institutions getting this right are the ones treating it as an infrastructure problem first and a policy problem second. Reverse that order and you'll be rewriting the document every August forever.