Navigating the AI Act: The GPAI Code of Practice and Guidelines
The question
What does the European Commission’s GPAI Code of Practice and Guidelines mean for providers of General Purpose AI models?
The key takeaway
The EU AI Act imposes obligations on General Purpose AI (GPAI) model providers from August 2025, with GPAI models that create systemic risk facing heightened scrutiny. To support compliance, the European Commission (Commission) has published detailed guidelines (Guidelines) and a voluntary GPAI Code of Practice (Code). GPAI providers who sign up to, and comply with, the voluntary Code will be able to demonstrate compliance with the GPAI-related obligations under the AI Act.
The background
The AI Act entered into force on 1 August 2024, establishing harmonised rules for the development, deployment and placing on the market of AI models across the EU. Recognising the broad applicability and potential impact of GPAI models, the AI Act imposes specific obligations on their providers, including requirements around transparency, safety, security and copyright. The obligations for providers of GPAI models apply from 2 August 2025, with the Commission's enforcement powers commencing on 2 August 2026.
To support compliance with the GPAI obligations, the AI Act requires the AI Office to facilitate the drawing up of codes of practice. The AI Office sits within the Commission and oversees the AI Act's rules on GPAI models at EU level.
The development
On 10 July 2025, the Commission published the Code, a voluntary tool which allows providers to demonstrate compliance with the AI Act. According to the Commission, this will reduce the administrative burden on providers and offer greater legal certainty than demonstrating compliance through other means. The Code is structured around three core chapters: Transparency, Copyright, and Safety and Security.
- Transparency: This chapter sets out the measures that signatories to the Code commit to implementing to comply with the transparency obligations of the AI Act. For example, providers must draw up and keep up to date documentation for each GPAI model placed on the market, using a template Model Documentation Form which specifies what model information should be presented and how. The Code also sets out the information that must be provided to the AI Office and national competent authorities (on request), and to downstream providers to enable them to understand the capabilities and limitations of the GPAI model and to comply with their own obligations under the AI Act.
- Copyright: Providers are obliged to implement a copyright policy that ensures compliance with EU copyright and related intellectual property rights. This chapter also requires providers to take steps to ensure that only lawfully accessible content is used for text and data mining or model training, including respecting technological measures such as the Robot Exclusion Protocol (robots.txt) and rights reservations expressed by rightsholders (a minimal robots.txt check is sketched after this list). The Code requires providers to put in place technical safeguards to prevent models from generating outputs that infringe copyright, and to prohibit infringing uses in their terms and conditions or documentation. Mechanisms must also be established to allow rightsholders to lodge complaints about non-compliance, and providers are expected to act on such complaints diligently and within a reasonable time.
- Safety and Security: This chapter applies only to GPAI models with systemic risk and establishes a framework for the continuous assessment and mitigation of those risks. For example, under the Code providers commit to adopting a state-of-the-art Safety and Security Framework setting out the processes and measures to be implemented to keep a model's systemic risks at an acceptable level. Providers are expected to implement robust risk governance structures, including assigning responsibility for risk management to specific committees or individuals within their organisation. The Code also outlines best practices for model evaluation, post-market monitoring and incident reporting, ensuring that providers can identify, assess and respond to emerging risks throughout the model's lifecycle. Technical measures are also required to safeguard model integrity and cybersecurity, including protections against insider threats and unauthorised access to model parameters.
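The Code does not mandate any particular technical implementation of these copyright commitments. Purely by way of illustration, a provider's data-collection crawler might honour robots.txt along the following lines; this is a minimal sketch using Python's standard urllib.robotparser, and the user agent string, function name and conservative fallback are our own assumptions rather than requirements of the Code:

```python
from urllib import robotparser
from urllib.parse import urlsplit

# Hypothetical crawler identity: a real provider would publish its own
# user-agent string so that rightsholders can address it in robots.txt.
USER_AGENT = "ExampleGPAIBot"

def may_fetch(url: str) -> bool:
    """Check a site's robots.txt before fetching a page as training data.

    Returns False if robots.txt cannot be read, erring on the side of
    respecting rights reservations expressed by the site operator.
    """
    parts = urlsplit(url)
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        parser.read()
    except OSError:
        return False  # conservative default when robots.txt is unreachable
    return parser.can_fetch(USER_AGENT, url)

# Example (hypothetical URL):
# may_fetch("https://example.com/articles/some-page")
```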
Many of the leading platforms have signed up to the Code, or signalled an intention to do so, including Google, Microsoft, OpenAI and Amazon. Meta has said it will not sign the Code, on the basis that it introduces legal uncertainties for model developers and measures which go beyond the scope of the AI Act, with Joel Kaplan (Meta's head of global affairs) calling it overreach.
The Code is complemented by the Commission's Guidelines, which clarify the scope and practical implementation of the obligations on providers of GPAI models under the AI Act. Notably, the Guidelines confirm that:
- the threshold for determining whether a model is a GPAI model is based primarily on the computational resources used to train it (measured in floating point operations, or FLOP). Under the Guidelines, a model is indicatively a GPAI model if it (a) is trained using more than 10²³ FLOP; (b) can generate language, images or video; (c) can perform a wide range of distinct tasks; and (d) displays significant generality (the FLOP thresholds are illustrated in the sketch following this list);
- GPAI models trained using more than 10²⁵ FLOP are presumed to present systemic risk and are therefore subject to additional obligations under the AI Act; the Commission can also designate other GPAI models as having systemic risk;
- a downstream actor who modifies a GPAI model will be considered the provider of a new GPAI model (and consequently inherits the corresponding obligations) if the modification leads to a significant change in the model's generality, capabilities or systemic risk. Indicatively, a change is "significant" if the modification uses more than one third of the original model's training compute;
- for GPAI models released under a free and open-source licence, certain transparency obligations are relaxed, provided the model's parameters and usage information are publicly available and no monetisation occurs; and
- the Guidelines encourage early engagement with the AI Office, particularly for providers of models with systemic risk or those planning to place new models on the market after August 2025.
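These compute thresholds reduce to simple arithmetic. The sketch below merely illustrates the indicative FLOP tests described in the Guidelines; the function names and structure are our own, and classification under the AI Act ultimately turns on the full legal criteria, not compute alone:

```python
GPAI_THRESHOLD_FLOP = 1e23           # indicative GPAI threshold (Guidelines)
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption of systemic risk (AI Act)

def classify_model(training_flop: float) -> str:
    """Indicative classification based solely on training compute.

    Compute is only one limb of the test: the model must also display
    significant generality and perform a wide range of distinct tasks.
    """
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI model presumed to present systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI model"
    return "below the indicative GPAI threshold"

def modification_makes_new_provider(original_flop: float,
                                    modification_flop: float) -> bool:
    """Guidelines' indicative rule: a downstream modification is
    "significant" if it uses more than one third of the original
    model's training compute."""
    return modification_flop > original_flop / 3

# Example: a model originally trained with 3e24 FLOP, then fine-tuned
# downstream using 1.2e24 FLOP (more than one third of 3e24).
print(classify_model(3e24))                           # "GPAI model"
print(modification_makes_new_provider(3e24, 1.2e24))  # True
```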
Why is this important?
The regulatory landscape for AI in the EU is rapidly evolving, and providers of GPAI models face complex compliance challenges. The Commission’s Guidelines seek to clarify key thresholds and provide guidance on obligations throughout the model lifecycle. The Code looks to complement this by offering a practical framework for demonstrating compliance.
The key question, however, is whether the Code goes beyond the actual requirements of the AI Act. As Joel Kaplan at Meta has warned, compliance with the Code and the Guidelines could suppress the development and deployment of advanced AI models in Europe. This risks putting European businesses at yet another disadvantage to their global competitors, especially in the US, and highlights once more the growing divide between US tech firms and European regulators in the critical area of AI development.
Any practical tips?
Providers first need to assess whether their models meet the GPAI threshold using the FLOP-based criteria in the Guidelines, as this determines the scope of their obligations. Early engagement with the AI Office is advisable, particularly for systemic risk models or those launching after August 2025. Signing up to the Code can streamline compliance from an EU regulatory perspective, especially when paired with clear documentation and robust governance. Other areas to watch include monitoring significant model modifications, reviewing open-source exemptions and ensuring that copyright safeguards and transparency measures remain aligned with both the Code and the Guidelines. Downstream actors should monitor their own modifications to GPAI models to identify the point at which they may be deemed providers themselves.
It follows that businesses integrating or developing GPAI models should review these requirements closely and assess how far they can comply, as this may well help avoid disruption and reputational risk as enforcement ramps up in the EU. Equally, as Meta has signalled, each provider needs to make its own assessment as to where the line should be drawn between hard regulation (the AI Act) and the (voluntary) Code and Guidelines – and at what point this hinders development in the highly competitive and fast-developing AI market.