CMA publishes updated principles for social media platforms on reviews and endorsements
The question
How does the new CMA guidance impact the obligations on social media platforms to tackle hidden advertising and fake reviews?
The key takeaway
The new guidance follows the coming into force of the consumer protection provisions in the Digital Markets, Competition and Consumers Act (DMCCA). It clarifies the principles the CMA expects social media platforms to follow to ensure their users comply with the relevant rules on hidden advertising and fake reviews.
The background
As reported in our Spring 2025 Snapshots, the DMCCA came into effect on 6 April 2025 with the aim of preventing unfair commercial practices in digital markets. The Competition and Markets Authority (CMA) is a pivotal part of this strategy, having been granted new direct enforcement powers to investigate and issue fines for non-compliance.
The CMA’s accompanying guidance to the DMCCA’s unfair trading provisions, published on 4 April 2025, set out the 32 practices prohibited under the new legislation - see our Spring 2025 Snapshots for further information.
Hidden advertising is one such practice, arising where content that has been paid for, endorsed or otherwise incentivised is not disclosed as an ad by the creator. Platforms now have a duty to prevent undisclosed ads from appearing on their sites.
The DMCCA also imposes a number of prohibitions in relation to consumer reviews - including publishing fake or undisclosed incentivised reviews, and failing to take reasonable and proportionate steps to prevent them. The fake review provisions of the DMCCA are also the subject of standalone CMA guidance. Businesses, including social media platforms, must now proactively deter and prevent fake reviews, and take steps to monitor and remove any that appear on their sites.
The development
The CMA has issued further guidance for social media platforms restating and clarifying the six principles all platforms should follow to ensure compliance with the fake review provisions of the DMCCA:
- Users should be clearly informed when content is incentivised, and fake reviews should not be permitted.
- Platforms should provide content creators with tools to easily and effectively label incentivised content.
- Platforms should take appropriate, proportionate and proactive steps and use available technology to prevent fake reviews.
- Users should be able to easily report suspected hidden advertising and fake reviews.
- Platforms should facilitate legal compliance by brands.
- Platforms should enforce their terms and conditions and take appropriate action against non-compliance.
The CMA makes clear that these principles are not rigid rules, but rather an outline of what platforms should be doing. It also notes that with the rapid development of new technology, platforms should keep compliance under regular review.
Why is this important?
The CMA now has the power to impose and enforce fines of up to the higher of £300,000 or 10% of global annual turnover on businesses that fail to comply with the regulations. The CMA previously highlighted in its approach to consumer protection that it is treating tackling fake reviews as a priority, so enforcement action is likely against any platform found to be in breach. For more information on this guidance, see our Spring 2025 Snapshots.
Initially, the CMA granted a three-month grace period within which it would work with platforms to update their policies and processes in line with the new rules. That period ended on 6 July 2025, and businesses can now expect the CMA to start acting against non-compliance. Indeed, at the end of July 2025, the CMA reviewed the websites of more than 100 businesses and wrote to more than half (54 in total) to warn them that they could be failing to comply with the rules. A reprimand from the CMA for undisclosed ads or fake reviews could not only lead to financial sanctions but also significantly damage consumer trust.
The updated social media platform principles indicate a clear expectation from the CMA that platforms conduct their own contextual risk assessments of the likelihood of banned content appearing on their sites, considering factors such as the type of platform (eg sales platforms will carry a higher risk of misleading reviews than professional networking platforms), the sources of content (eg users vs institutional publishers), and the impact of content on users.
Any practical tips?
Social media platforms should ensure they have conducted proportionate risk assessments to identify where and how banned content may appear on their sites. Platforms should also ensure that appropriate tools are in place: for content creators to accurately label content, for the platform to identify banned content at the point of posting, and for users to flag any posts or reviews they believe to be hidden advertisements or fake reviews. Where technology such as AI is used to detect non-compliant content, spot checks are recommended to ensure accuracy.
Autumn 2025