A new era for surveyors: RICS launches global standard on responsible use of AI

17 September 2025. Published by Katharine Cusack, Partner, and Cecilia Everett, Partner

The Royal Institution of Chartered Surveyors (RICS) has taken a decisive step into the future, publishing its first global professional standard for the responsible use of artificial intelligence (AI) in surveying. Released on 10 September 2025 and due to take effect from 9 March 2026, this landmark guidance aims to steer surveyors through the rapidly evolving landscape of AI technologies, balancing innovation with accountability.

Why a new standard?

AI is no longer a distant concept; it’s now embedded in the day-to-day work of surveyors, from analysing market data and drafting reports to conducting remote surveys and automating administrative tasks. While the benefits are clear, so too are the risks: erroneous outputs, bias, privacy concerns, and the potential for regulatory scrutiny if things go wrong. The new RICS standard recognises these opportunities and challenges, setting out a framework to help members manage risks and maintain public trust.

Aims of the standard

At its core, the standard is about ensuring that AI is used responsibly and transparently, by:

  • Upskilling the profession, so that surveyors understand the technology they’re using
  • Setting a baseline for practice management, to minimise harm from AI systems
  • Supporting informed decisions on AI procurement and reliance on outputs
  • Promoting clear communication with clients and stakeholders
  • Providing a framework for the responsible development of AI systems

Who does it apply to?

The standard applies wherever AI outputs have a material impact on the delivery of surveying services – think AI-generated document summaries, recommendations, or opinions that influence investigative decisions. Surveyors and firms must assess and document whether their use of AI meets this threshold, and comply with both RICS standards and relevant local laws.

Main requirements and practical points

Training and knowledge

Surveyors and firms are expected to develop and maintain a basic understanding of AI systems, their limitations, and risks. This includes awareness of the risk of bias, erroneous outputs, and data usage issues. The standard acknowledges that knowledge across the profession is uneven, so upskilling is essential.

Practice management

  • Data governance: Firms must ensure secure storage of and restricted access to private data, only uploading confidential information to AI systems with explicit consent and after risk assessment.
  • System governance: Each AI system’s appropriateness must be assessed and documented, with a register of systems maintained. Policies should clarify roles, training, and oversight.
  • Risk management: A regularly updated risk register is required, documenting key AI-related risks, mitigation plans, and status, with quarterly reviews (a minimal sketch of such a register follows this list).
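
For firms wondering what such a register might look like in lightweight form, the sketch below is a hypothetical illustration in Python: the field names, the example entry, and the 91-day review window are our own assumptions, not terms prescribed by the RICS standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIRiskEntry:
    """One row of an AI risk register - all fields are illustrative only."""
    system: str          # e.g. an automated valuation model
    risk: str            # e.g. bias in comparable selection
    mitigation: str      # planned or in-place control
    status: str          # e.g. "Open", "Mitigated"
    last_reviewed: date  # date of the most recent review

    def review_due(self, today: date) -> bool:
        """Flag entries not reviewed within roughly one quarter (91 days)."""
        return today - self.last_reviewed > timedelta(days=91)

# Example register with a single entry and a quarterly-review check
register = [
    AIRiskEntry(
        system="Document summariser",
        risk="Erroneous output reproduced in client reports",
        mitigation="Dip-sampling and sign-off by a chartered surveyor",
        status="Open",
        last_reviewed=date(2025, 6, 1),
    ),
]
overdue = [entry for entry in register if entry.review_due(date(2025, 10, 1))]
print(f"Entries overdue for quarterly review: {len(overdue)}")
```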

Using AI in practice

  • Procurement and due diligence: Before adopting any AI system, firms must conduct thorough due diligence, considering environmental impact, legal compliance, data permissions, risks of bias, and supplier liability. Where information is lacking, risks must be documented.
  • Reliability of outputs: Professional judgement is key. Surveyors must record assumptions, concerns, and any reliability assessments in writing. If outputs are unreliable, clients must be notified.
  • Quality assurance: For automated or high-volume outputs, regular "dip-sampling" (randomly selecting and reviewing a subset of outputs) is required to assure quality (a short illustration follows this list).
  • Client communication: Transparency is paramount. Clients must be informed in advance and in writing about when and how AI will be used, with engagement documents specifying processes, indemnity cover, redress mechanisms, and opt-out options.
  • Explainability: On request, firms must provide written information about the AI systems used, due diligence undertaken, risk management, and reliability decisions.
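
Where outputs are produced at scale, dip-sampling can be as simple as drawing a random subset of each batch for human review. The sketch below is a minimal illustration in Python; the function name, the 5% sampling rate, and the minimum sample size are our own assumptions, as the standard does not prescribe a particular method or threshold.

```python
import random

def dip_sample(outputs, sample_rate=0.05, minimum=5, seed=None):
    """Randomly select a subset of AI-generated outputs for human review.

    `outputs` is any list of output records (e.g. automated valuations);
    `sample_rate` and `minimum` are illustrative figures only.
    """
    rng = random.Random(seed)
    sample_size = min(len(outputs), max(minimum, round(len(outputs) * sample_rate)))
    return rng.sample(outputs, sample_size)

# Example: pull 5% of a batch of 200 automated valuations for manual review
batch = [f"valuation-{i}" for i in range(200)]
for record in dip_sample(batch, seed=42):
    print("Review:", record)
```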

Developing AI systems

Surveyors involved in developing AI systems have extended responsibilities. These include:

  • Documenting the application, risks, and alternative approaches
  • Conducting a sustainability impact assessment
  • Involving a diverse range of stakeholders
  • Ensuring compliance with data protection laws and obtaining written permission for personal data use

Accountability and impact

The new standard moves beyond best practice, introducing mandatory obligations. RICS members and regulated firms are expected to maintain robust documentation, upskill staff, and embed responsible AI use into their daily operations. The standard will be taken into account in regulatory, disciplinary, or legal proceedings, making compliance more than just a box-ticking exercise.

What does this mean in practice?

Surveyors will need to invest time and resources into updating policies, training staff, maintaining registers and risk assessments, and revising client documents. 

Ultimately, the standard aims to ensure that as AI becomes more prevalent, the profession retains its high standards of quality, transparency, and client trust. The additional burden is clear, but so too is the goal of RICS's guidance: by embracing responsible AI use, surveyors can harness its benefits, while safeguarding against its risks.
