Online safety regulators investigate X over Grok AI chatbot images

Published on 30 March 2026

The question

In allowing users to create sexualised images of real people using the Grok AI chatbot, has X complied with its duties under UK and EU law to protect its users, including children, from seeing illegal content?

The key takeaway

In January 2026, Ofcom and the European Commission opened separate investigations into possible regulatory breaches by X following reports that the Grok AI chatbot was being used to create sexualised versions of images of real people posted on the X social media platform. The European Commission is also investigating whether X has taken appropriate steps to mitigate harm arising from the use of Grok to power its recommender systems. The investigations are ongoing and, if X is found to have breached its legal obligations, the regulators have the power to impose significant fines.

The background

Grok, an integrated AI chatbot developed by the social media company X, is one of a number of chatbots capable of generating text and images. In the UK, social media companies such as X are subject to the provisions of the Online Safety Act 2023 (OSA), which places duties on platforms to protect users from seeing illegal content (see our Spring 2025 edition of Snapshots for more information on the OSA). Ofcom, the UK's independent online safety watchdog, is responsible for investigating non-compliance with, and enforcing, the OSA. From an EU perspective, platforms such as X fall within the scope of the Digital Services Act (DSA), which places obligations on such platforms to assess and mitigate systemic risks, including the dissemination of illegal content (see our Winter 2024 edition of Snapshots for more information on the DSA).

The development

On 12 January 2026, Ofcom opened an investigation into X under the OSA in light of widespread reports of the Grok AI chatbot being used to generate undressed images of real people, including sexualised images of children potentially amounting to child sexual abuse material (CSAM) under UK law.

Under the OSA, user-to-user services have an obligation to take proportionate measures to prevent users from encountering priority illegal content, such as non-consensual intimate images and CSAM. Such services have a further obligation to implement "highly effective" age assurance measures to prevent children from accessing harmful content, including pornography. Ofcom's investigation will seek to establish whether X failed to discharge its legal duties to implement such measures in allowing the Grok AI chatbot to be used in this way.

Two weeks later, the European Commission also announced that it had opened a formal investigation into the use of the Grok AI chatbot on X under the DSA. The DSA imposes obligations on very large online platforms (VLOPs), such as X, to assess and mitigate systemic risks, including risks arising from users in the EU being able to access illegal content on the platform, such as digitally manipulated sexual images. It requires such platforms to implement "reasonable, proportionate and effective mitigation measures" where such risks are identified. Concurrently, the European Commission has extended its previous investigation into X's recommender systems (algorithms which suggest content to users based on past behaviour or preferences) to take account of the platform's shift to a recommender powered by Grok AI. The Commission's investigations will assess whether the platform breached its obligations under the DSA.

X published a statement on its platform on 14 January 2026, in which it stated that it had removed the ability of the @Grok account to edit images of real people in revealing clothing. It also announced the implementation of geoblocking technology to prevent users from generating such images in jurisdictions where such content is illegal. X has since restricted the use of image creation and image editing tools via the @Grok account to paid subscribers.

Why is this important?

Ofcom's investigation – and the speed at which it was opened – suggests that the regulator is increasingly confident in using its powers under the OSA. As social media and internet search platforms move towards integrating AI tools into their offerings, it is important that they understand the legal risks should sufficient controls not be in place to prevent illegal or harmful content being generated. Ofcom's recent £1m fine against the AVS Group for failing to implement sufficient age assurance checks in respect of pornographic content underlines the strength of the enforcement action that Ofcom is prepared to take. Under the OSA, Ofcom can issue a fine of up to £18m or 10% of qualifying worldwide revenue, whichever is greater.

The European Commission's decision to open formal proceedings under the DSA is also a significant development in that it relieves national regulators in EU member states of their powers to take enforcement action in respect of the potential breaches under investigation. Instead, it is for the European Commission to take enforcement action, such as issuing a non-compliance decision. A previous non-compliance decision against X, on 5 December 2025, resulted in a fine of €120m for breaches of transparency obligations under the DSA.

Any practical tips?

Service providers implementing AI functionality in their platforms, either as a user-to-user chatbot or as technology underpinning a recommender system, should ensure that they understand the ways in which this is open to abuse by users and take proportionate measures to mitigate such abuse. The necessary measures will depend on the level of harm and the risk of children accessing harmful material. As a minimum, businesses should ensure that they understand and fulfil their obligations under the OSA in the UK and the DSA in the EU, or risk enforcement action including substantial fines.


Spring 2026
