EU AI Office begins drafting the Code of Practice on AI Transparency
The question
What should providers and deployers of generative AI systems expect from the EU’s forthcoming Code of Practice on AI transparency?
The key takeaway
The EU AI Office has launched a multi-stakeholder process to draft a voluntary Code of Practice to support compliance with Article 50 of the AI Act, which introduces new transparency obligations for AI-generated and AI-manipulated content. The Code is expected to be finalised before the Article 50 obligations begin to apply on 2 August 2026, and will offer practical guidance for providers and deployers preparing to meet these duties.
The background
Article 50 of the AI Act establishes explicit transparency duties for AI-generated content. These include:
- providers: ensuring that outputs of generative AI systems are identifiable as AI-generated;
- deployers: disclosing the use of AI to generate or manipulate content (e.g. deepfakes);
- public-facing use cases: adding clear disclosure where AI-generated content is used to inform the public on matters of public interest.
To help operationalise these requirements, the EU AI Office has convened independent experts, industry participants and civil society stakeholders to develop a voluntary Code of Practice to support consistent implementation across the EU.
See the Code of Practice on transparency of AI-generated content initiative for more details.
The development
The drafting process began with a kick-off plenary on 5 November 2025 and is expected to run until May-June 2026. Key features include:
- stakeholder participation - the drafting group includes generative AI providers, deployers, large online platforms, technical specialists, academics, and civil society organisations;
- two thematic workstreams:
- provider workstream - developing approaches for marking and detecting AI-generated content across formats (audio, image, video, text). Discussions include interoperability, robustness, watermarking, metadata, and feasibility constraints;
- deployer workstream - focusing on disclosure obligations for deepfakes, AI-generated publications, and content used in a public-information context (e.g., political, civic or news-related communications);
- drafting and consultation cycle - the Code will be shaped through plenaries, technical workshops, iterative drafting, and public consultation on draft versions before finalisation.
Complementary Commission guidance
The European Commission will publish non-binding guidelines in parallel, intended to clarify legal obligations under Article 50 and support consistent interpretation across Member States.
Why is this important?
Article 50 represents a major regulatory intervention aimed at addressing the risks of deception, misinformation and manipulation associated with AI-generated content.
The Code of Practice, although voluntary, is expected to:
- provide a practical roadmap for meeting transparency duties under the AI Act;
- influence the development of European and international technical standards;
- shape what regulators and platforms consider “appropriate measures” for marking and disclosing AI-generated content; and
- become a reference point in future enforcement activity, particularly where organisations rely on bespoke or non-standard transparency mechanisms.
For many organisations, early preparation will be essential: Article 50 obligations sit alongside other EU frameworks, including the Digital Services Act (DSA), the Copyright Directive, media regulation, and broader platform accountability initiatives.
Any practical tips?
Businesses developing or deploying generative AI should begin preparing now by:
- assessing readiness for Article 50 obligations, including labelling and disclosure workflows across all content types;
- evaluating technical options for watermarking, metadata tagging and detection, and understanding their limitations;
- engaging with standard-setting bodies (CEN/CENELEC, ISO/IEC, ETSI) as standards will likely underpin future conformity expectations;
- updating internal governance, editorial and content-moderation processes to ensure transparency measures are applied consistently across markets;
- monitoring the AI Office’s draft versions of the Code throughout 2026 and considering participation in consultations; and
- reviewing interaction with other EU obligations (e.g., deepfake labelling under the DSA) to ensure an integrated compliance approach.
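To make the metadata-tagging point above concrete, the sketch below shows one simple way a deployer might attach a machine-readable "AI-generated" label to a piece of content, together with an integrity tag so that downstream systems can detect if the label has been stripped or altered. This is purely illustrative: the key, field names and functions are assumptions for the example, not part of any standard, and real deployments would more likely rely on provenance frameworks such as C2PA-style content credentials or the technical standards expected from CEN/CENELEC and ISO/IEC.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; production systems
# would use properly managed cryptographic keys.
SECRET_KEY = b"example-signing-key"

def label_content(text: str) -> dict:
    """Attach a machine-readable AI-generation label plus an integrity tag."""
    record = {
        "content": text,
        "ai_generated": True,          # Article 50-style disclosure flag
        "generator": "example-model",  # hypothetical model identifier
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["integrity_tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    """Check that the label and content are unchanged since signing."""
    tag = record.get("integrity_tag")
    unsigned = {k: v for k, v in record.items() if k != "integrity_tag"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return tag is not None and hmac.compare_digest(tag, expected)

labelled = label_content("This summary was produced by a generative AI system.")
assert verify_label(labelled)

# Removing or flipping the disclosure flag invalidates the integrity tag.
tampered = dict(labelled, ai_generated=False)
assert not verify_label(tampered)
```

The design choice to bind the disclosure flag to the content with an integrity tag mirrors a recurring theme in the provider workstream: a label is only useful if it is robust against accidental or deliberate removal as content moves across platforms.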
Winter 2025