Comparable analysis is the heart of valuation work, and it's also the part most likely to eat your week. Pulling sales evidence, reconciling addresses against folio numbers, checking planning history, normalising for condition and location adjustments, then writing it up in a defensible format — that's a full day's work for a single instruction in many Irish firms. The Intelligence Brain doesn't replace the valuer's judgement on any of this. It removes the search-and-collate friction so the valuer spends time on the parts that actually need a human: the adjustments, the reasoning, and the signature on the report.
Why comparable analysis breaks at scale
Most Irish valuation practices I've spoken to run their evidence library on a mix of PSR exports, agents' brochures saved as PDFs, internal spreadsheets going back ten or fifteen years, and the personal memory of two or three senior valuers. That works fine on a quiet week. It falls apart when you've got six instructions in, a bank panel deadline looming, and the senior valuer on annual leave.
The structural problem is that comparable evidence isn't really structured data. The Property Services Regulatory Authority register gives you address, date, and price. It doesn't give you condition, BER beyond what's published, internal layout, garden orientation, frontage, or whether the kitchen was refitted in 2019. That detail lives in brochures, photographs, EPC certificates, planning files, and the valuer's site notes — all unstructured, all in different formats, all stored differently across the firm.
So when you're building a comparable schedule, you're not running a database query. You're running a research project. Six to eight comparables, each requiring you to chase down five or six different sources, on every instruction. That's where the time goes.
What an intelligence layer actually does for a valuer
The job of an intelligence layer for property work is to sit on top of the firm's existing evidence — every brochure, every prior valuation report, every site note, every PSR export, every planning search the practice has done — and make it queryable in the way a valuer actually thinks.
The questions a valuer asks aren't SQL questions. They're things like: "What have we seen in the Old Bridge area in the last eighteen months for three-bed semis with extensions?" or "Show me everything we've valued within 800 metres of this folio in the last two years, and flag anything where we noted subsidence or pyrite concerns." Or, more pointedly: "What did Tom write about this exact estate in 2022?"
The intelligence brain handles those questions because it indexes the firm's documents semantically — it understands that "three-bed semi with extension" matches a brochure that says "extended three bedroom semi-detached" or a prior report that describes "a semi-detached residence comprising three bedrooms with single-storey rear extension." That fuzzy matching is the part traditional databases never solved for valuers.
The technical architecture, briefly
The system runs on the firm's own hardware. That matters for property work in Ireland for two reasons: client confidentiality on prior valuations, and the fact that comparable evidence is a competitive asset. You don't want your firm's twenty years of accumulated knowledge sitting on a US cloud platform that any of your competitors could in theory subscribe to.
The architecture has four layers worth understanding:
- Ingestion. The system pulls in PDFs, Word documents, scanned site notes, photographs with EXIF data, spreadsheets, and structured exports from the PSR register and other sources the firm subscribes to. OCR runs over scanned material. Photographs get tagged for content where useful.
- Indexing. Documents are chunked and embedded into a vector store. Metadata — date, address, instruction reference, valuer initials, property type — is preserved and queryable alongside the semantic index.
- Retrieval. When a valuer asks a question, the system pulls the most relevant chunks across the entire evidence base, weighted by recency, geographic proximity, and property type match.
- Generation. A local language model assembles the answer, with citations back to the source documents. Every claim in the answer points to a specific page in a specific file. No citation, no claim.
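The retrieval step's weighting can be sketched as a scoring function. The field names, decay constants, and weights below are illustrative assumptions — the real system's schema and tuning will differ — but the shape is the one described above: semantic relevance multiplied by recency, geographic proximity, and property-type match.

```python
# Hedged sketch of retrieval weighting: semantic score from the vector
# store, discounted by document age, distance from the subject property,
# and property-type mismatch. All weights are illustrative.
from dataclasses import dataclass
from datetime import date
from math import radians, sin, cos, asin, sqrt, exp

@dataclass
class Chunk:
    doc_id: str
    semantic_score: float   # 0..1, as returned by the vector store
    doc_date: date
    lat: float
    lon: float
    property_type: str

def km_between(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def rank(chunks, subject_lat, subject_lon, subject_type, today):
    def score(c):
        recency = exp(-(today - c.doc_date).days / 365)                    # ~1-year decay
        proximity = exp(-km_between(c.lat, c.lon, subject_lat, subject_lon) / 2)
        type_match = 1.0 if c.property_type == subject_type else 0.5       # soft penalty
        return c.semantic_score * recency * proximity * type_match
    return sorted(chunks, key=score, reverse=True)
```

A recent sale of the same property type two streets away will outrank an equally relevant document that's four years old and twenty kilometres out — which is the ordering a valuer would impose by hand.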
The citation discipline is the part that matters most. A valuer cannot put a number on a report based on something the AI said. They need to see the brochure, read the prior report, check the PSR entry. The system's job is to find those sources fast, not to invent answers.
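The "no citation, no claim" rule is simple enough to express in a few lines. This is a minimal sketch under assumed types — `Claim` and its fields are hypothetical — but it captures the discipline: the assembly step refuses to emit anything it cannot tie to a specific page in a specific file.

```python
# Minimal sketch of citation enforcement. Claim is an illustrative type;
# the rule is that any generated statement without a source file and page
# is dropped before the answer reaches the valuer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    source_file: Optional[str] = None
    page: Optional[int] = None

def enforce_citations(claims):
    """Keep only claims that point at a specific page in a specific file."""
    return [c for c in claims if c.source_file and c.page is not None]

draft = [
    Claim("Sold for EUR 385,000 in March 2023.", "ppr_export_2023.csv", 1),
    Claim("The kitchen was refitted recently."),  # no source -> dropped
]
print([c.text for c in enforce_citations(draft)])  # → ['Sold for EUR 385,000 in March 2023.']
```

Dropping an unsupported claim is the conservative failure mode: the valuer sees a gap to investigate rather than a plausible sentence to trust.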
Comparable analysis in practice
Here's what a typical comparable workflow looks like once the intelligence layer is in place. The valuer enters the subject property — address, type, approximate floor area, key features. The system returns a ranked list of candidate comparables drawn from the firm's own evidence base and any structured external sources it's connected to.
For each candidate, the valuer sees: source documents (brochure, PSR record, prior valuation if the firm has handled the property before), distance from subject, transaction date, headline price, and any flags the system has surfaced — for example, if a prior valuation by the firm noted defects, or if the property was sold below market due to a probate sale flagged in the agent's notes.
The valuer then does the work only a valuer can do: deciding which comparables are genuinely comparable, what adjustments to make for condition, location, and timing, and how much weight to put on each. The system drafts the comparable schedule in the firm's standard format, with all citations preserved, and the valuer edits it.
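The division of labour in that workflow can be made explicit in the data model. In this sketch — field names are assumptions, not the firm's actual schedule format — the system populates every evidence field it can cite, and the judgement fields are deliberately left empty for the valuer to fill.

```python
# Illustrative schedule row: evidence fields filled by the system with
# citations attached; adjustment and weighting left blank for the valuer.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScheduleRow:
    address: str
    sale_date: str
    sale_price: int
    distance_m: int
    sources: List[str] = field(default_factory=list)   # citations, always present
    condition_adjustment: Optional[float] = None       # valuer's judgement
    weight: Optional[float] = None                     # valuer's judgement

row = ScheduleRow(
    address="14 Old Bridge Road",        # hypothetical comparable
    sale_date="2023-09-14",
    sale_price=410_000,
    distance_m=650,
    sources=["brochure_14OBR.pdf p.2", "ppr_export_2023.csv row 118"],
)
assert row.weight is None  # the system never pre-fills the judgement fields
```

Keeping the judgement fields structurally separate from the evidence fields also makes review easier: anyone auditing the report can see at a glance what was collated and what was decided.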
What used to take the bulk of a day takes a couple of hours, and — this is the more important point — it takes those couple of hours consistently, even when the senior valuer who knew every estate in the county is on holiday or has retired.
What it doesn't do, and why that matters
The intelligence brain does not produce a valuation figure. It does not adjust for condition. It does not decide whether a 1970s bungalow with replacement windows is more comparable to a 1980s bungalow in original condition or to a renovated 1960s bungalow two streets away. Those are professional judgements, and they sit with the valuer who's signing the report and carrying the PI insurance.
This is deliberate. Irish valuation practice is regulated under the Property Services (Regulation) Act and increasingly under the Society of Chartered Surveyors Ireland's standards, which assume a named professional taking responsibility for the figure. An AI that produced valuations would be both professionally improper and commercially useless — no bank panel would accept it, and no PI insurer would cover it.
What the system does is what should be automated: the search, the collation, the formatting, the citation tracking, and the consistency checks. The valuer keeps the judgement work, which is also the work that justifies the fee.
Practical considerations for Irish valuation firms
A few things I'd flag for any firm considering this kind of system. First, the quality of the output is bounded by the quality of the evidence base. If your firm has been throwing brochures in a shared drive without consistent naming for fifteen years, the system can still index them — but the first six months will involve some cleanup as inconsistencies surface.
Second, on-premise means on-premise. The hardware sits in the firm's office. There's no monthly cloud subscription that quietly grows. There's a one-off capital cost and ongoing maintenance, much like the firm's accounting system or its case management software. For most regional Irish firms, the economics of an on-premise intelligence system work out better than the recurring cost of cloud-based AI tools, particularly once you factor in the data sovereignty point.
Third, adoption is a training question, not a technology question. The valuers who get most out of these systems are the ones who learn to ask better questions. That's a habit that takes a few weeks to develop and is worth investing in deliberately rather than assuming it'll happen organically.
Where to start this week
If you run a valuation practice and you're curious whether this would work for your firm, the most useful thing you can do this week is an audit of your evidence base. How many prior valuation reports do you have, and where are they? How many brochures? Are they searchable, or are they scanned PDFs nobody can grep? Who in the firm holds the institutional knowledge that isn't written down anywhere? That audit tells you both the size of the prize and the size of the cleanup, and it's the conversation I'd want to have before recommending anything technical. The technology is the easy part. Knowing what you've got is the foundation everything else rests on.