European Commission publishes first draft Code of Practice on transparency of GenAI content
The question
What does the EU's first draft Code of Practice on transparency of AI-generated content (the Code) mean in practice for providers of AI systems and AI deployers?
The key takeaway
The Code acts as a broad blueprint for how providers and deployers should mark, detect and label AI-generated and AI-manipulated content under Article 50 of the EU AI Act. Although an early draft and entirely voluntary, the Code provides a good indication of the direction of travel in the EU, and in-scope businesses should begin aligning products and internal processes accordingly. The final version is expected to be published in late summer 2026.
The background
Article 50 of Regulation (EU) 2024/1689 (AI Act), which comes into full effect on 2 August 2026, introduces transparency obligations for providers and deployers of AI systems. Providers of AI systems generating content (audio, image, video or text) must ensure that the content is marked and detectable as AI-generated or AI-manipulated. Deployers of AI systems that generate deepfakes, or text published for public interest purposes, must disclose that the content has been AI-generated or manipulated.
In September 2025, the Commission announced that it would begin developing a Code of Practice on the Article 50 obligations (see our Winter 2025 edition of Snapshots). The drafting process for the Code of Practice has involved input from a multi-stakeholder consultation and independent experts.
The development
On 17 December 2025, the Commission published its first draft of the Code, stating the overarching objective of promoting "human-centric and trustworthy AI" whilst protecting fundamental rights in the EU against any harmful effects of AI. To achieve this, the Code introduces key commitments and considerations to help in-scope businesses comply with Article 50.
Commitments for providers of AI systems
- Implementation of 'multi-layered' AI marking techniques: providers should use markings that are machine-readable and multi-layered. These include an "imperceptible watermark" interwoven within the content, making it difficult to remove. Any metadata should also include provenance information and a signature of the GenAI system. Providers should also offer functions allowing deployers of AI-generated content to label their output with a "perceptible mark", in line with deployers' own commitments under the Code.
- Free AI detection system: AI systems should include publicly available detection mechanisms allowing users to check whether content has been AI-generated or manipulated. All users should be able to understand the results produced by these detection systems and should be given the guidance and training material needed to make an informed decision on which marking tools to use.
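To make the provider commitments concrete, the sketch below pairs a machine-readable provenance record (carrying provenance information and a signature of the generating system) with a free detection check. This is an illustration only, not the Code's prescribed technique: the key, system name and record fields are invented for the example, and a real implementation would likely use an industry provenance standard and asymmetric signatures rather than a shared secret.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key held by the provider; in practice an asymmetric
# key pair would let third parties verify without holding the secret.
PROVIDER_KEY = b"example-provider-signing-key"

def mark_content(content: bytes, system_name: str) -> dict:
    """Attach machine-readable provenance metadata to AI-generated content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": system_name,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Sign the provenance record so tampering with it is detectable.
    record["signature"] = base64.b64encode(
        hmac.new(PROVIDER_KEY, payload, hashlib.sha256).digest()
    ).decode()
    return record

def detect(content: bytes, record: dict) -> bool:
    """Detection check: does the metadata verify and match the content?"""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = base64.b64encode(
        hmac.new(PROVIDER_KEY, payload, hashlib.sha256).digest()
    ).decode()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw bytes of a generated image..."
meta = mark_content(image, "ExampleGenAI v1")
print(detect(image, meta))             # True
print(detect(b"edited bytes", meta))   # False
```

Note that this metadata layer is only one of the "multi-layered" markings the draft envisages: an imperceptible watermark embedded in the content itself would survive even if the metadata record is stripped.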
Commitments for deployers of AI systems
- Consistent disclosure of origin of deepfakes: deployers must implement a common taxonomy for classifying deepfakes that distinguishes between fully AI-generated content and AI-assisted content (content with mixed human and AI involvement). Deployers should also use a common two-letter AI acronym icon as a method of disclosure and commit to the future development of a common interactive EU icon (the draft Code provides examples for potential icons).
- Labelling deepfakes and AI-generated text: deployers must have in place internal processes to identify and label in "a clear and distinguishable manner" deepfake image, audio and video content, applying any necessary exceptions such as artistic or satirical work or law enforcement use. Deepfake labelling that relies on automation should be supported by appropriate human oversight. The Code sets out specific placement rules for AI-generated video (real-time and non-real-time), multimodal content, images, audio-only content and text publications with a public interest purpose. Disclosure is not required for AI-assisted text publications that have undergone human review or editorial control; however, deployers should retain appropriate documentation to evidence this. Icons and labels must also meet applicable EU accessibility requirements.
Both providers and deployers are expected to maintain appropriate compliance documentation setting out the practices and processes used, train personnel and cooperate with market surveillance authorities. Deployers must provide secure channels for the public and third parties to flag mislabelled or unlabelled deepfakes and AI‑generated text and remediate any issues without undue delay.
Why is this important?
Although voluntary, the Code is explicitly framed as a means for providers and deployers to demonstrate compliance with Article 50, though adherence will not be "conclusive evidence" of compliance. The final version may inform market surveillance authorities' expectations and enforcement criteria. Businesses can expect a more detailed Code in future iterations: additional commitments and measures may be added, and existing ones removed or revised, through further stakeholder engagement.
Any practical tips?
In-scope businesses should establish whether they act as "providers" or "deployers" (or both) under the AI Act and assess their current policies, practices and technology around marking, detection tools and labelling to identify any compliance gaps. Social media platforms that already employ measures similar to those set out in the Code should begin to assess whether those measures align with it.
Interested businesses should engage with future Commission consultations or working groups where possible, to help keep ahead of any future revisions to the Code.
Spring 2026