Most estate agents in Ireland are running their business on a CRM that was designed for selling cars, a shared inbox that nobody fully owns, and a WhatsApp thread where the actual deals get done. The data is there — viewings, valuations, BER certs, vendor calls, email chains, photos, floor plans, the daft.ie enquiries — it's just scattered across six places and three phones. An intelligence brain doesn't replace any of that. It sits underneath it, reads everything once, and gives you one place to ask "what's happening with 14 Oakwood Drive?" or "which vendors haven't heard from us in three weeks?" and get a real answer.
What "intelligence brain" actually means for an agency
I'll keep the jargon short. An intelligence brain is a private retrieval system. It ingests your documents, emails, listing data, and notes, indexes them in a vector database alongside a structured store, and lets you query the lot in natural language. The model that does the answering runs on hardware you control — either a server in the office or a private tenancy — and the data never leaves that boundary.
For an estate agent or auctioneer, that means the brain knows about every property you've ever listed, every vendor instruction letter, every BER report, every offer email, every viewing feedback note, and every Land Registry folio you've pulled. When a vendor rings on a Tuesday morning asking why their three-bed in Cahir hasn't moved, the negotiator doesn't need to dig through Outlook for fifteen minutes. They ask the brain, and it summarises the seven viewings, the two low offers, the feedback themes ("kitchen dated", "garden small"), and the comparable sales in the area from the PSRA register.
That's the whole pitch. It's not magic. It's good plumbing.
The data sources that matter in Irish property
An estate agent AI in Ireland lives or dies on what it can read. The useful sources, ranked roughly by how much signal they carry per kilobyte:
- Vendor and buyer email threads — the richest source. Tone, urgency, objections, solicitor delays, mortgage approval status. A good ingestion pipeline pulls from Microsoft 365 or Google Workspace via Graph API or Gmail API with delegated access, not IMAP scraping.
- Listing platform feeds — daft.ie and myhome.ie enquiry exports, plus your portal's own analytics. Click-through, saved-search appearances, enquiry-to-viewing conversion.
- CRM records — Reapit, Acquaint, Alto, or whatever you use. Property records, contact records, viewing logs, offer ledgers.
- Document store — BER certs, floor plans, vendor instruction forms, AML documents, Property Services Regulatory Authority compliance files. PDFs, mostly. Some scanned, some native.
- The PSRA Residential Property Price Register — public, free, and the closest thing Ireland has to a sold-price comparable database. Joining your own listings to PSRA records by Eircode and date gives you a private market view nobody else has (a join sketch follows this list).
- Eircode and Ordnance Survey data — for accurate geocoding, boundary checks, and proximity queries.
- Folio and Land Registry extracts — if you do auctioneer work, the title pack matters. These are scanned PDFs and need OCR.
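To make the PSRA join concrete, here's a minimal sketch in Python, assuming you've exported the register to CSV and that both datasets carry an Eircode column — where the register lacks one, you fall back to address matching. Column names and the 180-day window are illustrative, not fixed.

```python
import pandas as pd

# Sketch: join our own listings to PSRA sold-price records by Eircode,
# then keep matches where the sale dates fall within a plausible window.
psra = pd.read_csv("ppr_all.csv", parse_dates=["date_of_sale"])
listings = pd.read_csv("our_listings.csv", parse_dates=["sale_agreed_date"])

# Normalise Eircodes on both sides: uppercase, no internal space.
for df in (psra, listings):
    df["eircode"] = df["eircode"].str.upper().str.replace(" ", "", regex=False)

merged = listings.merge(psra, on="eircode", suffixes=("", "_psra"))
window = (merged["date_of_sale"] - merged["sale_agreed_date"]).abs() <= pd.Timedelta(days=180)
private_market_view = merged[window]
```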
The mistake most teams make is trying to ingest all of this on day one. Don't. Start with email and the CRM. Those two cover about eighty percent of the questions you actually want to ask.
Retrieval architecture, in plain terms
Here's how the Irish property AI stack I build looks under the bonnet. There are five layers, and each does one job.
Ingestion. A scheduled worker pulls from each source on its own cadence — emails every five minutes, CRM nightly, PSRA weekly. Each item gets a stable ID, a source tag, a timestamp, and a tenant ID (so multi-office firms stay separated). Documents get OCR'd through Tesseract or a hosted equivalent, then chunked.
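As a sketch of that ingestion envelope in Python: every item gets a deterministic ID (so re-runs are idempotent), a source tag, a timestamp, and a tenant ID before it goes anywhere near the chunker. The field names here are illustrative, not a fixed schema.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class IngestedItem:
    item_id: str       # deterministic: sha256 of source + native ID
    source: str        # "email" | "crm" | "psra" | "docstore"
    tenant_id: str     # one office = one tenant
    fetched_at: datetime
    body: str

def make_item(source: str, native_id: str, tenant_id: str, body: str) -> IngestedItem:
    # Hashing source + native ID means re-ingesting the same email or
    # CRM row always produces the same item_id, so nothing is duplicated.
    stable_id = hashlib.sha256(f"{source}:{native_id}".encode()).hexdigest()
    return IngestedItem(stable_id, source, tenant_id, datetime.now(timezone.utc), body)

# e.g. make_item("email", msg_id, "clonmel-office", msg_body)
```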
Chunking and embedding. This is where most off-the-shelf RAG systems fall apart for property. A vendor instruction letter is not the same shape as an email thread, which is not the same shape as a BER report. I chunk by semantic boundary — paragraph, email message, table row — not by fixed token count. Each chunk gets embedded with a model running locally (bge-large or e5-mistral, depending on hardware) and stored in a vector database. I tend to use pgvector inside Postgres because it keeps the structured data and the embeddings in one place, and the SQL is honest.
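Here's roughly what boundary-aware chunking looks like for plain text, as a minimal sketch. Real pipelines branch per document type, but the principle is the same: split on semantic boundaries, merge neighbours up to a size cap, and never cut mid-paragraph. Each resulting chunk then goes through the local embedding model and into pgvector with a pointer back to its source row.

```python
def chunk_by_paragraph(text: str, max_chars: int = 1500) -> list[str]:
    # Split on blank lines (the semantic boundary for plain prose),
    # then greedily merge neighbouring paragraphs under the size cap.
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```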
Structured layer. Alongside the vector store, the brain maintains a normalised schema: property, vendor, buyer, viewing, offer, communication. Every chunk references back to a row in this schema. That's how the brain answers "list all vendors who haven't been contacted in three weeks" — that's a SQL query, not a vector search. A lot of "AI" products are very bad at this distinction.
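To show the distinction, here's the three-weeks-of-silence question as a sketch against that schema. Table and column names are illustrative, not a fixed design.

```python
import psycopg2

# "List all vendors who haven't been contacted in three weeks" is SQL,
# not vector search: group communications per vendor, keep the ones
# whose latest contact is stale or missing entirely.
SQL = """
SELECT v.name, p.address, MAX(c.sent_at) AS last_contact
FROM vendor v
JOIN property p ON p.vendor_id = v.id AND p.status = 'active'
LEFT JOIN communication c ON c.vendor_id = v.id
GROUP BY v.name, p.address
HAVING MAX(c.sent_at) < now() - interval '21 days'
    OR MAX(c.sent_at) IS NULL;
"""

with psycopg2.connect("dbname=brain") as conn, conn.cursor() as cur:
    cur.execute(SQL)
    for name, address, last_contact in cur.fetchall():
        print(name, address, last_contact)
```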
Query orchestration. When a user asks a question, a router decides whether it's a structured query, a semantic query, or both. "What's the average sale price in Clonmel since January?" is structured. "What did the Murphys say about the kitchen?" is semantic. "Show me all properties in Tipperary where the vendor sounded unhappy in the last email" is both, and the router runs the SQL filter first, then the vector search inside that subset.
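A sketch of that routing contract in Python. In practice the router is usually a small model call that returns one of three labels; the keyword heuristic below just makes the shape concrete.

```python
from enum import Enum

class Route(Enum):
    STRUCTURED = "structured"  # SQL only
    SEMANTIC = "semantic"      # vector search only
    HYBRID = "hybrid"          # SQL filter first, then vector search in the subset

# Illustrative signals only; a real router is a classifier, not a keyword list.
AGGREGATE_SIGNALS = ("average", "how many", "count", "total", "median")
OPINION_SIGNALS = ("say about", "sounded", "feedback", "mentioned", "complained")

def route(question: str) -> Route:
    q = question.lower()
    structured = any(k in q for k in AGGREGATE_SIGNALS)
    semantic = any(k in q for k in OPINION_SIGNALS)
    if structured and semantic:
        return Route.HYBRID
    return Route.STRUCTURED if structured else Route.SEMANTIC
```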
Generation. The retrieved chunks plus the question go to a language model — for most agencies a 70B-class open-weight model running on a single GPU box is plenty. The model writes the answer, citing the source chunks. Every answer in the UI shows you the underlying email or document, so the negotiator can verify before they ring the vendor back. No citation, no answer. That rule alone stops about ninety percent of hallucination problems.
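The "no citation, no answer" rule is enforceable in a few lines. A sketch, assuming retrieved chunks carry stable IDs and the model is instructed to cite them inline as [chunk:ID]; `llm` stands in for whatever local model call you run.

```python
import re

def answer(question: str, chunks: list[dict], llm) -> str:
    # Present each chunk with its ID so the model can cite it.
    context = "\n\n".join(f"[chunk:{c['id']}]\n{c['text']}" for c in chunks)
    prompt = (
        "Answer using ONLY the context below. Cite every claim with the "
        f"[chunk:ID] it came from.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    draft = llm(prompt)
    # Refuse the answer outright if it cites nothing, or cites an ID
    # that was never retrieved. Refusal beats a confident guess.
    cited = set(re.findall(r"\[chunk:([^\]]+)\]", draft))
    known = {str(c["id"]) for c in chunks}
    if not cited or not cited <= known:
        return "I can't answer that from the records I have."
    return draft
```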
What it actually does on a Monday morning
Concrete examples of how the auctioneer AI works in practice:
- The morning brief. At 8am the brain emails each negotiator a list of their properties with status changes since Friday — new offers, viewing feedback, vendor complaints, mortgage approvals received, solicitor delays. Two minutes of reading instead of forty minutes of inbox triage. (A sketch of the brief query follows this list.)
- The valuation pack. When you take a new instruction, the brain assembles a comparables pack from PSRA, your own historic sales in the area, current competing listings, and time-on-market trends. The valuer adjusts and signs off. The grunt work is done.
- The vendor update. Once a fortnight every active vendor should hear from you. The brain drafts the update — viewings, feedback themes, market context, recommendation — and the negotiator edits and sends. Vendors who feel informed don't go to the agency down the road.
- Compliance trail. The PSRA wants evidence of AML checks, Letters of Engagement, and proper record-keeping. The brain knows what's missing on each file before the auditor does.
- Listing copy. A property listing AI that actually works writes the daft.ie description from the floor plan, BER, vendor's own description, and the negotiator's viewing notes — in your firm's voice, not generic estate-agent boilerplate. The negotiator approves or edits. The point isn't to remove the human, it's to remove the blank page.
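For the morning brief in the first bullet, here's a sketch of the query behind the push, reusing the structured layer from earlier. Names are illustrative; the point is that a scheduled job runs this per negotiator and emails the result, rather than waiting to be asked.

```python
import psycopg2

BRIEF_SQL = """
SELECT p.address, e.kind, e.summary, e.occurred_at
FROM event e
JOIN property p ON p.id = e.property_id
WHERE p.negotiator_id = %s
  AND e.occurred_at >= %s  -- e.g. last Friday 8am, computed by the caller
  AND e.kind IN ('offer', 'viewing_feedback', 'vendor_complaint',
                 'mortgage_approval', 'solicitor_delay')
ORDER BY p.address, e.occurred_at;
"""

def morning_brief(conn, negotiator_id: str, since) -> str:
    with conn.cursor() as cur:
        cur.execute(BRIEF_SQL, (negotiator_id, since))
        rows = cur.fetchall()
    lines = [f"{addr}: {kind} -- {summary} ({ts:%a %H:%M})"
             for addr, kind, summary, ts in rows]
    return "\n".join(lines) or "No changes since Friday."
```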
The on-premise question, and why it matters here
Estate agents in Ireland hold a lot of personal data. Buyer financial circumstances, vendor reasons for selling (divorce, bereavement, debt), AML documents, PPS numbers in some cases. Under GDPR you are the data controller and you are responsible for where that data goes. If you pipe it through a US-based AI API, you have a transfer problem, a processor problem, and a vendor lock-in problem all at once.
Running the model locally — on a server in your office, or in a private Irish or EU tenancy — solves all three at the source. The hardware is not as expensive as people think. A single workstation-class GPU box will handle a small-to-mid agency comfortably. For larger firms, two boxes in failover. The capex pays back fast against per-seat AI subscriptions, and you keep control of your own data. I've written more about how this looks for property firms specifically at the intelligence brain for property, and the broader architecture across other regulated verticals at the intelligence brain overview.
Where it goes wrong, and how to avoid it
Three failure modes I see repeatedly:
Treating it as a chatbot. If the only interface is a chat window, adoption dies in three weeks. The brain has to push — morning briefs, alerts, draft updates — not just sit there waiting to be asked.
Skipping the structured layer. Pure RAG on emails alone will give you plausible-sounding nonsense for any question that requires counting, filtering, or sorting. You need the SQL bones.
No human review on outbound text. Anything the brain writes that goes to a vendor, buyer, or solicitor goes through a negotiator first. Always. The brain drafts, the human sends. That's the contract.
Where to start this week
Pick one office, one negotiator, and one workflow — the vendor fortnightly update is the best candidate because it's painful, repetitive, and unambiguously valuable. Get the brain reading email and the CRM for that one workflow. Run it for a month with the negotiator reviewing every draft. Measure the time saved and the vendor response rate. Then expand. Don't try to boil the ocean on day one.