Part 5 – AI Regulation Globally

Published on 10 June 2025

This is Part 5 of 'Regulation of AI'

There have been various initiatives encouraging countries around the world to cooperate on AI regulation, including knowledge sharing and securing commitments from technology providers.

 

International agreements

 

Currently, there is only one legally binding international treaty on AI: the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The treaty, signed by the US, EU and UK on 5 September 2024, creates a common framework for AI systems built around three overarching safeguards:

 

  • protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them
  • protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined
  • protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect their citizens from potential harms and ensure AI is used safely

 

On 30 October 2023, the G7 published its international guiding principles on AI, together with a voluntary code of conduct for AI developers. The principles are a non-exhaustive list aimed at promoting safe, secure and trustworthy AI, and are intended to build on the OECD's AI Principles, adopted in May 2019.

 

Global summits

 

There have been three global AI summits. In November 2023, the UK Government hosted the first, titled the AI Safety Summit. The summit brought together representatives from governments, AI companies, research experts and civil society groups from across the globe, with the stated aims of considering the risks of AI and discussing how they could be mitigated through internationally co-ordinated action. One output of the summit was the Bletchley Declaration, which focused on international collaboration in identifying AI safety risks and creating risk-based policies to address them. Another was an agreement between senior government representatives from leading AI nations and major AI developers and organisations (including Meta, Google DeepMind and OpenAI) on a plan for safety testing of frontier AI models.

 

The second global AI summit was held in Seoul in May 2024. Ten countries and the EU signed the Seoul Declaration, committing to cooperate more closely both between themselves and through organisations such as the UN, G7, G20 and OECD, while sixteen AI firms made voluntary safety commitments.

 

In February 2025, the AI Action Summit was held in Paris. Over 1,000 participants from more than 100 countries attended the summit, which focused on the key themes of inclusive and environmentally sustainable AI. The Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet was signed by 60 countries, but not by the US or UK.

 

Standards

 

AI-related standards have been published by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC). In its response to the white paper, the UK government specifically highlighted the importance of engaging with global standards development organisations such as the ISO and IEC. The most prominent AI-related ISO/IEC standards are:

  1. ISO/IEC TR 24028:2020, which analyses the factors that can affect the trustworthiness of systems providing or using AI
  2. ISO/IEC TR 24368:2022, which provides an overview of the ethical and societal concerns surrounding AI
  3. ISO/IEC 23894:2023, which offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI, including guidance on how organisations can integrate risk management into their AI-driven activities and business functions
  4. ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, maintaining and continually improving AI management systems within organisations

 

In addition, relevant standards have also been published by: (i) the British Standards Institution (BSI), including PD CEN/CLC TR 18145:2025, which provides guidance on sustainable AI technologies; and (ii) the Institute of Electrical and Electronics Engineers (IEEE), including IEEE 3119-2025 on the Procurement of AI and Automated Decision Systems.

 

 
