Ofcom publishes an explainer on the regulation of AI chatbots under the Online Safety Act

Published on 30 March 2026

The question

How does the UK's Online Safety Act apply to AI chatbots, and when are chatbot providers required to comply with online safety duties?

The key takeaway

Ofcom has clarified that AI chatbots fall within the scope of the Online Safety Act (OSA) only when they form part of user‑to‑user services, search services, or services that publish pornographic content. Many standalone chatbots will therefore fall outside the regime unless they enable user‑to‑user interaction, search the internet, or generate pornographic material.

The background

Ofcom has published a short explainer on its website outlining how AI chatbots are regulated under the OSA. The explainer responds to increasing public concern about harmful chatbot outputs, including reports of chatbots imitating real people or generating distressing content. Ofcom emphasises that the Act applies only to specific service types and that “chatbots are not subject to regulation at all if they only allow people to interact with the chatbot itself and no other users”. The update follows Ofcom’s recent investigation into X, in which it reiterated that not all chatbot‑generated content is in scope of the OSA.

The development

Ofcom’s explainer highlights three key points:

1. an AI chatbot is in scope of the Act if it forms part of:

  • a user‑to‑user service, enabling users to share content with each other;
  • a search service, returning results from multiple websites or databases; or
  • a pornographic content service, which must use “highly effective age assurance”.

2. AI chatbots fall outside the Act if they:

  • only interact one‑to‑one with users;
  • do not search multiple websites or databases; and
  • cannot generate pornographic content. As Ofcom notes in its Grok update: “images and videos that are created by a chatbot without it searching the internet are not generally in scope.”

3. limits of current powers: Ofcom stresses that it “can only take action on online harms covered by the [OSA]”, and any extension of powers would be for government and Parliament.

Ofcom is supporting government work on potential future regulation of chatbots and will continue monitoring emerging risks as AI tools evolve.

Why is this important?

The explainer provides welcome clarity for businesses deploying conversational AI tools. It confirms that many standalone chatbots, particularly those that do not enable user‑to‑user interaction or internet search, are currently outside the OSA. However, where chatbots are integrated into wider platforms, or where they generate content that can be shared between users, providers may fall squarely within the regime. The update also signals that policymakers are actively considering whether the current framework, which applies to the regulation of AI in the UK more generally, adequately addresses AI‑driven harms, suggesting further regulatory developments are likely.

Any practical tips?

Businesses integrating AI chatbot functionality into existing services should assess whether the proposed use and functionality of the chatbot would bring it within scope of the OSA. Providers of AI chatbots should also consider whether their chatbots’ outputs could be shared on user‑to‑user platforms, triggering online safety duties. Given Ofcom’s ongoing investigations and its emphasis on risk assessment, providers of AI chatbots and businesses that integrate them should document design choices, monitor emerging harms, and ensure age‑assurance measures are in place where relevant.


Spring 2026
