Cyberflashing crackdown under the UK's Online Safety Act

Published on 02 January 2026

The question

What obligations will tech firms face once cyberflashing becomes a “priority offence” under the Online Safety Act?

The key takeaway

The UK Government intends to classify cyberflashing as a priority offence under the Online Safety Act (OSA). This will require in-scope tech companies to take proactive and proportionate steps to reduce the risk of users encountering unsolicited sexual images. Failure to comply could result in enforcement by Ofcom, including fines of up to 10% of global annual turnover and, in extreme cases, service-blocking in the UK.

The background

Cyberflashing (sending unsolicited sexual images for the purpose of causing alarm, distress or sexual gratification) became a criminal offence in England and Wales in January 2024 following amendments introduced by the OSA.

In a press release on 29 September 2025, the Technology Secretary announced that the Government intends to designate cyberflashing as a priority offence. The move reflects growing concerns about online safety, supported by research indicating that around one in three girls under 18 has received unsolicited sexual images online.

A priority-offence designation under the OSA activates enhanced obligations for regulated services, including mandatory risk assessments, proactive mitigation measures, and ongoing safety-by-design practices.

The development

When cyberflashing becomes a priority offence:

  • user-to-user and search services in scope of the OSA must assess the likelihood of users encountering unsolicited sexual images on their services;
  • platforms must implement proportionate, preventative measures to minimise that risk;
  • Ofcom’s Codes of Practice (or alternative compliant measures) will guide what “proportionate” looks like in practice.

The Government has highlighted possible measures, including automated detection of harmful images, stronger moderation, clearer content policies, and restricting how images can be sent by new or unverified users. While these are examples, not mandatory tools, they signal regulatory expectations around effective and proactive design choices.

Ofcom will supervise compliance and may use its full enforcement toolkit where risks are not adequately addressed.

Why is this important?

The designation elevates cyberflashing to one of the OSA’s most serious categories of harm, meaning:

  • platforms must shift from passive moderation to active prevention;
  • Ofcom will expect demonstrable governance, testing, and monitoring of mitigation measures;
  • enforcement risks increase significantly, including fines of up to 10% of global turnover and potential access restrictions.

Beyond regulatory exposure, there is a commercial upside: platforms that visibly protect users from harmful behaviour build trust, retention, and brand resilience.

Any practical tips?

There is no single prescribed solution, but tech businesses should begin preparing by:

  • conducting a targeted risk assessment of where and how unsolicited images may be shared on their service;
  • testing proportionate technical mitigations, such as opt-in image sharing for non-contacts, friction for new accounts, or automated detection tools (with strong privacy controls);
  • reviewing user policies and enforcement pathways to ensure clear consequences for offenders;
  • documenting governance, decisions, and safety measures, recognising that Ofcom may request evidence;
  • monitoring Ofcom guidance and Government statements to understand what will qualify as “effective” mitigation.

Early action will help organisations demonstrate readiness ahead of formal designation and reduce future enforcement risk.

Winter 2025
