UK government publishes research briefing on labelling of GenAI content

Published on 30 March 2026

The question

What does the UK government’s new research briefing tell us about labelling GenAI content, and how might this inform future transparency requirements for large online platforms?

The key takeaway

There is currently no UK legal requirement to label GenAI content, but the House of Commons Library briefing and related government note show a clear policy direction towards risk‑based transparency and technical content provenance, closely aligned with the EU AI Act’s forthcoming Article 50 obligations.

The background

On 20 January 2026, the House of Commons Library published a research briefing on “AI content labelling”. The paper explains what AI content labelling is, why it is being used and how it works, with a particular focus on deepfakes and other synthetic media. The briefing outlines current practice in the UK and internationally and surveys policies adopted by online platforms, news organisations, search engines and gaming companies.

The development

The Commons Library research briefing can be distilled into the following key points:

  • No current UK legal duty to label AI content
    • The government’s copyright and AI work acknowledges the benefits of clear AI labelling, but highlights technical challenges and uncertainty over whether future legislation will impose mandatory labelling.
  • Forthcoming EU AI Act transparency regime
    • Article 50 of the EU AI Act will introduce transparency duties for AI systems that interact with humans, generate or manipulate content, or produce deepfakes.
    • Providers of such systems must mark AI outputs in a way that is machine‑readable and detectable, while deployers must disclose when content has been artificially generated or manipulated.
    • These rules are due to apply from August 2026, but the European Commission has proposed delaying implementation until 2027 and is developing guidance and a code of practice to clarify content labelling obligations.
  • Concepts and objectives of AI content labelling
    • AI content labelling is described as marking content generated or altered by AI so users understand its origin and can better assess reliability, particularly where realistic deepfakes and synthetic media may mislead.
    • The briefing distinguishes between:
      • “impact‑based” labels, which flag content that could be misleading or harmful (such as deepfakes, fraud, or disinformation)
      • “process‑based” labels, which explain how content was created (including use of AI) without necessarily suggesting risk or harm.
  • Technical mechanisms and standards
    • Visible labelling methods include text overlays, captions, UI badges, watermarks and audio notices applied to content, including where AI tools are used in‑platform.
    • Invisible approaches include digital watermarks and embedded metadata in line with emerging standards.
    • The Coalition for Content Provenance and Authenticity (C2PA) has developed “Content Credentials”, an open protocol that uses cryptography to encode information on content origin and editing history, signposted to users by a “cr” watermark that links to provenance details.
    • Major technology companies and platforms (including Adobe, LinkedIn and Meta) are already adopting such provenance standards.
  • Challenges and user behaviour
    • Practical problems include inconsistent standards, limited interoperability between watermarking tools, and the possibility that labels or watermarks can be removed or spoofed.
    • Usability concerns include how prominent labels should be and whether visible markers might disrupt the user experience.
    • Evidence suggests that generic “AI‑generated” labels tend to reduce users’ perceived accuracy of, and willingness to share, content (including truthful or human‑created material) and do not always improve users’ ability to judge veracity.
    • The impact of labels varies by context, with stronger effects for “high‑stakes” content (such as politics or health) than for entertainment or low‑stakes material.
  • Current platform and media practice
    • Social media services are combining automated detection technologies, provenance metadata (eg C2PA and IPTC standards) and user disclosures to label AI content, often with more prominent labels for realistic synthetic or sensitive material (eg elections and health).
    • Many news organisations have adopted editorial policies requiring that substantial use of AI in content creation is disclosed, while maintaining human responsibility for accuracy and compliance.
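For readers curious about the mechanics behind the provenance standards mentioned above, the following is a highly simplified Python sketch of the core idea behind content credentials: a signed manifest that binds a cryptographic hash of the content to an assertion about how it was made, so that both tampering and the AI-generation claim are machine-verifiable. This is illustrative only; the real C2PA specification uses X.509 certificate chains, COSE signatures and JUMBF containers embedded in the media file, and all names and keys here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real implementation would use an X.509
# certificate and asymmetric signatures rather than a shared secret.
SIGNING_KEY = b"demo-signing-key"


def make_credential(content: bytes, tool: str) -> dict:
    """Build a simplified provenance manifest: a hash of the content plus
    an assertion about its creation, signed with an HMAC."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertion": {"generator": tool, "ai_generated": True},
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the content still matches
    the hash recorded in the manifest."""
    sig = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    manifest["signature"] = sig  # restore the manifest unchanged
    return hmac.compare_digest(sig, expected) and (
        manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
    )
```

A verifying platform could surface a user-facing label (the “cr” marker in the C2PA scheme) whenever `verify_credential` succeeds and the manifest asserts AI generation, and flag or suppress provenance claims when verification fails.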

Why is this important?

Although the briefing has no formal legal effect, it is likely to influence UK Parliamentary and regulatory thinking on how AI content should be labelled (including Ofcom’s approach under the Online Safety Act). It also reinforces the trajectory of EU law under Article 50 of the AI Act, underlining machine‑readable marking and risk‑based, user‑facing labels as emerging expectations. For large platforms and AI providers, policymakers are looking to existing industry practice to judge what is technically and operationally feasible. Businesses that already use a combination of standards, automated detection, and nuanced label taxonomies will be better placed to comply with future transparency rules in both the UK and EU.

Any practical tips?

Large digital platforms should treat this briefing as an early signal of policy expectations rather than a new compliance obligation. It supports a risk‑based, dual‑track labelling model:

  • impact‑based labels for potentially harmful or misleading AI content (for example deepfakes, scams or election‑related material)
  • alongside broader process‑based transparency where AI is used in content creation or editing.

In practice, this points towards deploying a multi‑layered technical solution that combines visible labels with invisible provenance signals (such as C2PA content credentials and metadata) to ensure AI outputs are both understandable to users and machine‑detectable.

Businesses should also consider how label wording, placement and granularity vary between low‑stakes and high‑stakes content. Aligning global product design with the EU AI Act’s upcoming Article 50 requirements and documenting how current labelling decisions reflect this kind of evidence will help demonstrate that organisations are keeping pace with emerging “good practice” in anticipation of future regulation.


Spring 2026
