The Advertising Association’s 2026 GenAI guide: Key implications for industry
The question
What does the 2026 Best Practice Guide for the Responsible Use of Generative AI (GenAI) in advertising mean for brands, agencies, and media owners?
The key takeaway
The new Best Practice Guide for the Responsible Use of GenAI (the Guide) in advertising establishes eight key principles to help brands, agencies and media owners use GenAI responsibly and effectively. It addresses common questions around disclosure, data use, bias, oversight, wellbeing, brand safety and monitoring, and provides practical guidance on implementation. Although voluntary, it is likely to shape industry standards and regulatory expectations.
The full Guide is available here, and an SME version is also available here (adapted from the full Guide, focusing on the principles most relevant to SMEs and making implementation easier and more proportionate for them).
The background
When GenAI use in advertising exploded in late 2022, the IPA and ISBA responded the following year by publishing 12 industry principles to encourage ethical use. Since then, adoption has only ramped up. By July 2025, ISBA reported that the share of advertisers using GenAI had jumped from 9% to 41% in just over a year. The technology now touches everything from initial ideation through to full creative production, with fully AI-generated commercials, synthetic actors, and infinitely localised campaigns becoming the new norm.
Building on the IPA and ISBA 2023 principles, the Advertising Association has published the Guide, developed under the government and industry-led Online Advertising Taskforce.
The development
The Guide is intended to help advertising practitioners use GenAI in an ethical and effective manner. It is designed to complement existing law (including UK GDPR and the Equality Act 2010) and the self-regulatory regime (including the CAP/BCAP Codes). The Guide is targeted at advertisers and brands, advertising agencies, media owners, and technology providers and platforms serving the advertising industry. It is voluntary and not legally binding.
The Guide establishes eight principles:
- Ensuring transparency: Disclose AI-generated or AI-altered ad content where appropriate, using a risk-based approach focused on avoiding consumer harm.
- Ensuring responsible use of data: Use personal data for training, targeting and personalisation of GenAI lawfully.
- Preventing bias and ensuring fairness: Design, deploy and monitor GenAI to reduce bias and prevent discrimination.
- Ensuring human oversight and accountability: Put proportionate human review in place before publishing, with clear responsibility for outputs.
- Promoting societal wellbeing: Avoid creating or amplifying harmful, misleading or exploitative content and use GenAI to support consumer protection where possible.
- Driving brand safety and suitability: Assess and mitigate reputational and placement risks, ensuring GenAI outputs align with brand values and safety standards.
- Promoting environmental stewardship: Consider environmental impacts when choosing tools and workflows and favour energy-efficient options where practical.
- Ensuring continuous monitoring and evaluation: Monitor deployed GenAI systems to spot issues (eg performance, bias drift, compliance gaps) and intervene when needed.
Why is this important?
The Guide provides clarity on key issues the industry has been grappling with: how to manage risks like bias and privacy, and when disclosure is required.
Though voluntary, it is likely to establish new 'bare minimum' standards, as brands and agencies start baking 'adherence to the AA GenAI guide' into supplier contracts. Some of the principles could also become a reference point for the ASA when assessing whether advertisements are materially misleading or inadequately substantiated.
Any practical tips?
The Guide provides a useful preview of what future regulation might look like: best practice guidance typically emerges first and, if harms persist, its principles are later formalised into binding requirements. As a priority, therefore, advertisers should consider implementing a risk-based framework so that the decision to disclose GenAI use is proportionate to the risk of consumer harm or misinterpretation. This should take into account whether the AI-generated content:
- could mislead a reasonable consumer
- could materially affect a transactional decision
- involves real or synthetic people, voices, testimonials or spokespersons
- appears in a context where confusion is more likely
- targets vulnerable audiences.
Advertisers should also:
- Ensure compliance with existing data protection and privacy legislation.
- Evaluate brand reputation risks associated with GenAI content, including alignment with brand voice, cultural sensitivities and placement safety.
- Design AI systems using diverse training datasets and test outputs for bias across different demographics.
- Implement human oversight mechanisms proportionate to the risk level.
- Evaluate whether AI is necessary or whether traditional methods are sufficient.
Separately, platforms should look to implement systems that help advertisers carry out appropriate human oversight. They are also encouraged to use AI proactively to enhance the automated detection of harmful or non-compliant content.
Spring 2026