ICO opens formal investigations into X’s data processing regarding Grok AI

Published on 27 March 2026

The question

What data protection concerns has the Information Commissioner’s Office (ICO) identified in relation to X’s deployment of the Grok AI system?

The key takeaway

The ICO has launched formal investigations into X’s processing of personal data in respect of Grok, X’s AI chatbot. The investigations will focus on whether the system unlawfully generated sexualised manipulated images and whether appropriate safeguards were built into Grok’s design and deployment.

The background

On 7 January 2026, the ICO contacted X Internet Unlimited Company (XIUC) and X.AI LLC (X.AI) following reports that Grok had been used to generate sexual images of individuals, including children, in breach of data protection law. The ICO has now opened formal investigations into both entities. According to the ICO, the reported creation and circulation of such content “raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public”. The investigations will examine whether personal data was processed lawfully, fairly and transparently, and whether Grok incorporated adequate safeguards to prevent harmful manipulated images being generated.

The development

The ICO’s investigations, which are ongoing, will assess:

  • lawfulness, fairness and transparency: whether XIUC and X.AI had a lawful basis to process the personal data, whether they did so fairly and if they provided sufficient information to individuals about the processing of their data for these purposes. As the ICO notes: “losing control of personal data in this way can cause immediate and significant harm… particularly where children are involved”
  • design and deployment safeguards: whether Grok’s technical design included appropriate measures to prevent the generation of intimate or sexualised images using personal data
  • high-risk processing obligations: whether the companies identified and mitigated risks associated with synthetic or manipulated imagery involving real individuals, and whether high-risk processing was subject to appropriate safeguards.

The ICO is working closely with Ofcom and other regulators in relation to the data protection and safety compliance of digital platforms. Ofcom and the European Commission have launched separate investigations into X under the Online Safety Act and Digital Services Act respectively, but those relate to content-safety and systemic-risk obligations rather than data protection. On 23 February 2026, the European Data Protection Supervisor released a Joint Statement on AI-Generated Imagery and the Protection of Privacy. The statement, which represents the views of 61 data protection authorities around the world (including the ICO), highlighted the risks and the requirement for “robust safeguards” around image generation.

Why is this important?

The investigations highlight regulators’ increasing scrutiny of AI systems that process personal data to generate synthetic or manipulated content. For businesses deploying generative AI, the ICO’s focus on lawful processing, transparency, and built‑in safeguards underscores the need for robust risk assessments, particularly where models may generate harmful or privacy-intrusive content. The regulatory activity across the ICO, Ofcom, the European Commission and other data protection regulators also signals a broader trend towards multi‑regulator oversight of AI‑driven platforms, with potential implications for compliance strategies across jurisdictions.

Any practical tips?

Businesses developing or deploying generative AI tools should ensure that high‑risk processing, especially involving real individuals’ images, is supported by clear legal bases, strong safeguards, and effective monitoring. Controllers should carry out due diligence on model‑training inputs, ensure guardrails to prevent harmful outputs have been implemented, and maintain responsive mechanisms for complaints and takedown requests. Given the cross‑regulatory interest in Grok, organisations should also anticipate increased expectations around transparency, risk documentation and cross‑functional governance when rolling out new AI features.


Spring 2026
