Part 2 - AI regulation in the EU

Published on 04 October 2024

This is Part 2 of 'Regulation of AI'.

The EU AI Act, the main elements of which are covered in our previous article and our podcast, entered into force on 1 August 2024. Most provisions apply from August 2026, but some apply earlier or later:

  • February 2025 – Chapter 1 general provisions, such as those on AI literacy, apply and prohibited AI practices are banned
  • August 2025 – obligations for general-purpose AI (GPAI) models apply, as do rules on penalties (except for GPAI models)
  • August 2026 – penalties for providers of GPAI models apply
  • August 2027 – rules for AI systems embedded into regulated products apply

The intention is to achieve proportionality by setting the level of regulation according to the potential risk the AI poses to health, safety, fundamental rights or the environment. AI systems with an unacceptable level of risk to people's safety, for example AI systems that manipulate human behaviour to circumvent free will, are prohibited. High-risk systems are permitted in certain circumstances, subject to compliance with requirements relating to disclosure of technical documentation, dataset transparency, and regular monitoring and auditing. For high-risk systems that may adversely affect people's safety or fundamental rights, some deployers must also carry out a mandatory fundamental rights impact assessment. The Act does not only define high risk in the abstract: it also annexes a list of high-risk use cases that the Commission will keep up to date. GPAI models are regulated too, with rules applying to all general-purpose AI models and additional rules for GPAI models posing systemic risks.

Transparency is a key theme that runs throughout the AI Act. Transparency requirements for GPAI systems (such as ChatGPT) include drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train the systems. For high-risk systems, individuals will have the right to receive explanations about decisions based on those systems that affect their rights.

The new regime is complex and potentially administratively onerous, which may favour tech industry incumbents (unless AI itself provides the mechanism to cut through the many obligations). Although the new rules will be implemented at national level, an AI Office has been set up to contribute to fostering standards and testing practices and to supervise the implementation and enforcement of the new rules on GPAI models. The Office comprises five units, including the Regulation and Compliance Unit, which focuses on coordinating enforcement, and the Unit on AI Safety, which identifies systemic risks of GPAI models. Its first tasks will be to prepare guidelines on the AI Act's definition of an AI system and on the prohibited AI practices. It will also help to draw up codes of practice relating to GPAI (due by May 2025), such as on the level of detail required for the summary of the content (ie the main data sets) used for training.

In the period after entry into force but before the AI Act becomes applicable, the Commission is promoting the "AI Pact", under which AI developers voluntarily pledge to implement key obligations of the AI Act ahead of the legal deadlines. Those who have already produced internal guidelines or processes to ensure the design, development and use of trustworthy AI systems can share, collaborate on and test them with the AI Pact community now.

Those with an active role in the AI system lifecycle, including those that deploy or operate AI, will enjoy a legal presumption of conformity with the AI Act in the EU if they adhere to harmonised standards. Harmonised standards are European standards produced by the European standardisation organisations (notably CEN/CENELEC and ETSI). Consequently, the AI standards being developed by CEN-CENELEC's Joint Technical Committee 21 (JTC21) in support of the AI Act will be important AI governance tools. They will be relevant to any AI developer seeking to do business in the EU and will also inform discussions on best practices and regulatory interoperability for AI at the wider international level.

AI system providers looking to comply with the AI Act will also be preparing for their liability risk under the new EU Product Liability Directive (approved in March 2024), which explicitly covers AI systems. The changes brought by the new directive are expected to come into force by mid-2026. The proposed EU AI Liability Directive, which aims to address liability for damage caused by AI systems, may now not materialise and is under review by the European Parliamentary Research Service. As above, the UK is seeking views on the adequacy of current legislation in this area and the most effective way to address liability within the lifecycle and supply chain of AI.

It will be some time before the EU AI Act is fully implemented. However, existing legislation such as the GDPR, the Digital Services Act and the Digital Markets Act already provides a starting point for approaching similar principles, such as the need for people to be given notice when an AI system is being used in a way that could affect them, and may help with how to approach accountability, transparency, accuracy, storage limitation, security, and performing risk assessments. The European Data Protection Supervisor, the data protection regulator for the EU institutions, will expand its supervision to the AI Act's implementation and, in that role, has published its "First EDPS Orientations for EUIs using Generative AI" to provide EU institutions, bodies, offices and agencies with practical advice and instructions on the processing of personal data when using generative AI systems.
