AI-dentifying risks: ensuring trust: the new RICS standard

09 July 2025. Published by Katharine Cusack, Partner; Catherine Zakarias-Welch, Knowledge Lawyer; and Emma Wherry, Of Counsel

On 4 March 2025, RICS launched a public consultation on its new professional standard, "Responsible use of AI, 1st edition", which ran until 29 April 2025.  Members were asked to participate in order to help shape how RICS incorporates the use of AI into the industry.

As Andrew Knight, AI, data and tech lead at RICS, points out, AI, although not new, has advanced rapidly since ChatGPT launched in November 2022, quickly popularising LLMs, which now permeate almost every aspect of our daily lives.  One of AI's main advantages is its ability to analyse and interrogate enormous amounts of data.  Surveyors and valuers are fast adopting tools which can, almost instantly, analyse market data for valuations (and even do so automatically); carry out sales comparisons; forecast trends; enhance risk management by comparing reports with RICS standards and highlighting inconsistencies; summarise information; draft reports; carry out remote surveys and detect defects with the use of drones; as well as much more, including training and time-consuming administrative tasks.

We have recently seen court cases where AI has been misused, and courts have issued stark warnings on the importance of verifying AI work product to ensure accuracy, with the threat of regulatory scrutiny or even contempt proceedings.  RICS wants to establish appropriate safeguards to ensure members and RICS-regulated firms maintain professional standards in their work as AI's use increases (stressing that this covers both knowing and unknowing use).  The profession is held to a high standard of quality work, and this must be retained as and when new technologies are implemented.

In light of this, the new standard aims to:

• Ensure checks and balances when using AI

• Manage data to maintain public confidence

• Clearly communicate with clients

Accountability and knowledge

Whilst some provisions are recommended best practice, the new standard contains mandatory obligations.  RICS was keen to highlight that the new standard serves simply as a baseline and that RICS professional standards will be taken into account in regulatory, disciplinary or legal proceedings.  Set as a "baseline", members are expected to expand their knowledge of AI, including use cases, risks and limitations.  Given the known risks of bias and hallucinations, RICS members must balance these against the opportunities.

Practice Management

Privacy and Confidentiality

RICS members will need to consider how private and confidential information is stored and managed; in some cases this may include restricting access.  This is because, when information is uploaded into an AI tool, the data may be used to train the model and may become accessible to others using the same tool.

Governance and Appropriateness

RICS members must only use an AI tool where it is the most appropriate tool for the task in hand, considering the services it provides and its inherent risks.  Before use, members must carry out a risk assessment (by way of statement or policy) covering the services the tool provides and any alternatives; sustainability, given the increased energy these systems consume; and privacy risks.  To support this, members must also maintain a dated written register of the AI system(s) in use which impact their services, recording each system's purpose and review date.  Policies must also be implemented on procurement and responsible use, as well as training guidance for those using the AI tools.

Risk Management

Members must record identified risks and how they are being handled.  A RAG (red, amber, green) risk register must be created covering overarching risks (bias; erroneous outputs; limitations; and data retention); mitigation; and quarterly reviews.  Members could also use PESTEL/SWOT tools in addition.

Scrutiny and Assurance

Members should apply scepticism when reviewing output from AI tools, scrutinising results using their professional judgment, skills and experience.  Their decision on its reliability must then be recorded in writing, taking into account the purpose; the data used by the AI tool; algorithm limitations; and variables which could have an impact (eg market differences).  Any concern about reliability must be documented and affected stakeholders must be informed in writing.  There is a carve-out for high-volume work, where members must instead carry out random sampling.  Members are also required to ensure adequate professional indemnity insurance is in place for the use of AI.

Terms of engagement and communication

To maintain client trust, transparency is key.  Members must therefore provide clients with written information, in their client relationship documents, on when and where AI has been used; PI cover; the complaints process relating to its use; and redress.

On request, RICS members must provide a written explanation of the AI tool used; its limitations; the due diligence undertaken before its use; how risks are identified and managed; and reliability decisions.

Development

As well as integrating external third-party AI tools into their working systems, RICS members may also be involved at an earlier stage, developing their own in-house systems.  This extends the scope of accountability, and members must apply the new standard here too.  A written record must also be maintained documenting the AI tool's application; risks; and any other potential approach.  In addition, members must produce a written sustainability impact assessment; include a diverse range of stakeholders; document compliance with data laws; obtain written permission if using personal data; and maintain adequate PI cover.

We await the results of the consultation.  In the meantime, it is clear that an additional, necessary, burden will be placed on members to ensure risks are appropriately managed when using this new(ish) technology, both in terms of initial and ongoing time and cost.  This will include producing additional policies; updating client relationship documents; publishing reports, including on sustainability and diversity; training; and obtaining adequate PI insurance.
