Accountability will be more important than technological ambition in the next phase of artificial intelligence (AI) in real estate.

Jeff Blaylock is head of client, UK at Deepki
Over the past two years, AI has often been positioned as a silver bullet for some of the biggest challenges in real estate, from advancing sustainability strategies to unlocking value across portfolios. But as adoption of AI has increased, can we really trust it to drive meaningful action?
In an industry where decisions translate directly into concrete actions – investments, retrofits and asset revaluations – approximate answers are not just unhelpful; they are a business risk.
The distinction between generic and domain-trained AI is critical. Most mainstream AI systems are trained on open data, which works for writing text but fails when applied to the specific ‘messy’ realities of buildings. The breakthrough is not AI generating fluent answers, but its ability to extract and enrich fragmented information into something that is decision-ready.
We need to focus on high-impact use cases where AI helps improve human decisions. Real estate data such as energy audits, technical reports and sensor feeds rarely align neatly. Domain-trained AI can ingest these disparate formats to recommend specific retrofit pathways, such as heat pump transitions or lighting upgrades, while also calculating the long-term financial impact and level of carbon reduction. This is automated retrofit ROI modelling.
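At its simplest, retrofit ROI modelling of this kind weighs upfront cost against annual savings and carbon reduction over a measure's lifetime. The sketch below is a minimal illustration of that calculation; the measures, costs and savings figures are invented for illustration and are not drawn from any real model or benchmark.

```python
# Minimal sketch of retrofit ROI modelling: for each candidate measure,
# estimate simple payback and lifetime carbon reduction.
# All figures below are illustrative assumptions, not real benchmarks.

def retrofit_roi(capex, annual_saving, annual_co2_saved_t, lifetime_years):
    """Return simple payback (years) and total CO2 saved (tonnes)."""
    payback = capex / annual_saving
    total_co2 = annual_co2_saved_t * lifetime_years
    return payback, total_co2

# Hypothetical measures: (capex GBP, annual saving GBP, tCO2/yr, lifetime yrs)
measures = {
    "heat pump transition": (250_000, 32_000, 85.0, 20),
    "LED lighting upgrade": (40_000, 11_000, 12.0, 15),
}

for name, (capex, saving, co2, life) in measures.items():
    payback, total = retrofit_roi(capex, saving, co2, life)
    print(f"{name}: payback {payback:.1f} yrs, {total:.0f} t CO2 over {life} yrs")
```

In practice a domain-trained system would also discount cash flows and account for energy-price and grid-carbon scenarios; the simple payback shown here is only the starting point.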
Another area is portfolio-wide risk identification. AI is able to analyse risk across entire portfolios in seconds rather than weeks. For instance, it can cross-reference Energy Performance Certificates against shifting regulatory requirements to identify which assets are at risk of becoming ‘stranded’ before they hit a valuation cliff.
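The core of such a screen is mechanical: compare each asset's EPC band to a future minimum standard and flag those below it. The sketch below shows the idea; the assets, ratings and the minimum-C threshold are hypothetical examples, not regulatory advice.

```python
# Illustrative screen for 'stranded' assets: flag properties whose EPC
# rating falls below a future minimum standard. Assets, ratings and the
# chosen threshold are hypothetical examples only.

EPC_ORDER = "GFEDCBA"  # worst band to best

def at_risk(assets, minimum):
    """Return assets rated below the given minimum EPC band."""
    floor = EPC_ORDER.index(minimum)
    return [name for name, rating in assets.items()
            if EPC_ORDER.index(rating) < floor]

portfolio = {"Office A": "D", "Warehouse B": "B", "Retail C": "E"}
print(at_risk(portfolio, "C"))  # assets at risk under a minimum-C standard
```

The value AI adds is not this comparison itself but assembling it at portfolio scale from certificates, lease data and shifting regulatory timelines in seconds.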
Meanwhile, we are moving beyond dashboards that describe what has already happened toward tools that recommend what to do next. Instead of just flagging a spike in energy use, AI can analyse maintenance logs and invoices to suggest precise operational tweaks, such as recalibrating building management systems or replacing a faulty sensor, grounded in real asset data.
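The shift from descriptive to prescriptive can be pictured as a rule that not only flags a spike but attaches a next action grounded in asset records. The sketch below is a deliberately simple illustration; the readings, threshold and suggested actions are invented, and a real system would infer the action from maintenance logs and invoices rather than a fixed string.

```python
# Sketch of moving from "describe" to "recommend": flag an energy spike
# against a simple baseline and attach a suggested next action.
# Readings, threshold and action text are invented for illustration.

def recommend(readings, baseline, threshold=1.25):
    """Flag readings above threshold * baseline with a suggested action."""
    actions = []
    for day, kwh in readings:
        if kwh > threshold * baseline:
            actions.append((day, f"{kwh} kWh vs baseline {baseline} kWh: "
                                 "check BMS schedule and sensor calibration"))
    return actions

week = [("Mon", 410), ("Tue", 405), ("Wed", 560), ("Thu", 400)]
for day, note in recommend(week, baseline=400):
    print(day, note)
```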
But the usefulness of AI for these tasks only becomes meaningful when three conditions are met: operational trust, data quality and domain expertise.
Trust is not a philosophical concept; it is operational.
So-called black-box recommendations – where a tool proposes something but its reasoning is hard to fathom – erode confidence the moment a user questions the results. Asset owners need to understand where an insight comes from so it can stand up to scrutiny. AI does not correct bad data; it amplifies it. Training models on incomplete or unverified inputs leads to ‘hallucinations’ – confident outputs that are flawed or completely wrong. In a sector where decisions involve millions of pounds, that is not acceptable.
And real estate is governed by physical constraints and financial imperatives. AI systems not supervised by industry experts will struggle to capture this complex reality. Domain-trained models – those that are validated by practitioners – can reflect how decisions are actually made on the ground.
The next phase of AI in real estate is about accountability. AI is not a shortcut and does not replace human judgement. Buildings are still transformed by engineers and investors making informed choices. However, what AI can do is empower those people with clearer insight and greater confidence to act.
As the industry recognises sustainability as a core business topic, the tools supporting those decisions must be held to the same standard as the decisions themselves. The goal is turning complexity into action and ensuring innovation creates real-world impact rather than noise.