UK’s new AI Cyber Security Code of Practice

Published on 09 May 2025

The question

How is the UK Government seeking to protect AI systems from growing cyber security threats, in particular deployable AI systems that use GenAI?

The key takeaway

The Government has now established and published a voluntary Code of Practice for the Cyber Security of AI (the Code). The Code aims to protect the end-user of AI models and tools, outlining the steps businesses should take to cover the entire AI supply chain and gain a competitive edge in the AI marketplace.

The background

As recently discussed in our Autumn 2024 Snapshot, the Government has been working on updating the proposed voluntary AI Cyber Security Code of Practice, first published in November 2023, in light of feedback gathered from its call for views on the cyber security of AI in the UK, launched on 15 May 2024.

The scope of the Code is focused on deployable AI systems (rather than those developed for research purposes) that incorporate deep neural networks, such as GenAI. The rationale behind the development of the Code is that there are distinct differences between “AI” and “software”, and the Code seeks to plug the gap left by the advancement of AI and to address the new cyber security challenges that developers face.

On 31 January 2025, the UK Government published the official policy paper for the Code of Practice for the Cyber Security of AI. The primary objective of the Code is to “protect AI systems from growing cyber security threats”.

The development

Now that the Government has had the opportunity to review and absorb feedback from stakeholders, the finalised Code has been formally introduced. The Code establishes 13 core principles covering the AI supply chain and infrastructure and focuses on five groups of stakeholders (as set out below). The associated implementation guide also gives organisations practical guidance on adopting the core principles and integrating them into existing frameworks.

The key stakeholders include:

  • developers: businesses and individuals responsible for creating or adapting an AI model and/or system
  • system operators: businesses responsible for embedding or deploying an AI model and system within their infrastructure
  • data custodians: any type of business, organisation or individual that controls data permissions and the integrity of data that is used for any AI model or system to function
  • end-users: any employee within an organisation or business, and UK consumers, who use an AI model and system for any purpose, including to support their work and day-to-day activities
  • affected entities: all individuals and technologies, such as apps and autonomous systems, that are directly affected by AI systems or decisions based on the output of AI systems.

The Code separates the principles into five phases, reflecting the lifecycle of an AI tool:

  • secure design
  • secure development
  • secure deployment
  • secure maintenance, and
  • secure end of life.

Key principles outlined in the Code include:

  • designing systems for security as well as functionality and performance
  • evaluating the threats and managing risks to AI systems
  • enabling human responsibility for AI systems
  • securing the infrastructure and supply chain
  • maintaining communication and processes associated with end-users and affected entities
  • maintaining regular security updates, patches and mitigations for AI models and systems
  • monitoring the system’s behaviour, and
  • ensuring proper data and model disposal.

Each principle makes clear which of the stakeholders outlined above is primarily responsible for its implementation.

Why is this important?

The Code seeks to encourage the safe development and deployment of AI tools. By adhering to the Code, AI developers will be able to build a strong reputation with UK consumers and differentiate themselves from competitors through their commitment to the safe and secure development of AI.

The implementation of the Code aligns with the global trend of strengthening cyber security legislation, such as the Cyber Security and Resilience Bill announced in the UK, and the EU AI Act. This trend is driven primarily by the increased risk of cyber threats and identified supply chain vulnerabilities. The focus on security aims to help promote the UK as a leader in the AI marketplace.

It is also worth noting that the Government has developed the Code with the hope that it will form the basis of a new standard for secure AI that is eventually adopted globally.

Any practical tips?

The Code provides a useful reference point for businesses aiming to gain a competitive edge in the AI marketplace. Further, the UK Government will be monitoring the application of the Code and working with stakeholders to determine whether further AI regulation will be required in the future. Whilst adoption of the Code is voluntary, it follows that businesses that apply the principles of the Code when developing an AI model or tool will likely be a step ahead in terms of future AI regulatory compliance, particularly if the Code forms part of a wider European or global standard.

Spring 2025
