Developing responsible GenAI – the UK and EU regulatory view

Published on 10 May 2025

The question

What is the UK and EU data protection authorities’ view on ensuring responsible generative AI (GenAI) development and deployment?

The key takeaway

Whilst GenAI brings exciting opportunities across a variety of sectors, data protection authorities agree that GenAI falls within the scope of the GDPR and, therefore, that any data processing in the context of GenAI must comply with data protection laws.

The background

The use of personal data in the development and operation of GenAI is a significant area of concern for data protection authorities. In January 2024, the UK’s Information Commissioner’s Office (ICO) launched a consultation series on generative AI and data protection. The consultation examined the lawful basis for web scraping to train GenAI. See our Spring 2024 edition of Snapshots.

From a European perspective, the Irish Data Protection Commission sought guidance from the European Data Protection Board (EDPB) to harmonise the regulatory framework across Europe regarding the processing of personal data for GenAI development and deployment.

The development

UK development

The ICO published its much-awaited outcomes report in December 2024, detailing its policy positions on GenAI following a public consultation which garnered over 200 responses. Like the consultation, the report focused on several key areas, including: (i) the lawful basis for using web-scraped data to train AI models; (ii) determining the data protection roles of entities in the AI supply chain; and (iii) the engineering of individual rights into GenAI models.

The ICO noted that there was a “serious lack of transparency” around how GenAI uses public data for training models, which has led to an erosion of public trust in these systems. The UK regulator emphasised that the time has come for developers to “tell people how [developers are] using their information”, calling on developers to be more transparent about their data practices, including clarifying: (i) what personal information is being collected; (ii) how it is being used; and (iii) how individuals and publishers can better understand these processes.

Developers are advised to ensure that personal data used in training GenAI models is obtained lawfully, and that mechanisms for exercising individual rights are built into the models themselves.

Despite the consultation being split into five chapters, the ICO acknowledged that there were gaps in the consultation and its response. For example, the chapter on lawful basis focused predominantly on web scraping by AI developers rather than the use of a pre-trained tool by deployers, which would apply to most businesses looking to use AI.

EU development

The EDPB also issued its opinion in December 2024. It addressed issues regarding the anonymisation of AI models, the use of legitimate interests as a lawful basis for processing, and the consequences of using unlawfully processed personal data in AI development and deployment.

The EDPB guidance suggests that the compliance of AI models must be evaluated on a case-by-case basis, deferring to the judgment of local data protection authorities. It provides a non-exhaustive list of methods by which data protection authorities can assess, and controllers can demonstrate, the anonymity of data in AI models across model design, analysis, testing and documentation.

The guidance also focuses on the validation of the legitimate interests lawful basis for the development and deployment of AI models. It confirmed that legitimate interests could be a valid lawful basis for both developing and deploying AI models, as long as the balancing test favours the interests of the data controller or a third party over the rights of data subjects, taking mitigation measures into account. The EDPB has also suggested that controllers publishing their legitimate interest assessments may help to increase transparency and fairness.

Why is this important?

Both of these pieces of guidance represent the first time the respective UK and EU regulators have considered the interplay between data protection principles and GenAI specifically. They affirm the regulators’ view that existing requirements under the GDPR apply to AI systems just as they would to any other technology. They also indicate the regulators’ areas of concern (eg transparency) and where a regulator may focus its enforcement efforts if it identifies non-compliance amongst developers or deployers of AI.

Any practical tips?

Any business within the scope of the EU or UK GDPR that plans to develop or deploy AI should review these sources of guidance. Even where a tool is not expected to process extensive personal data, some personal data is likely to be embedded in the tool through its training, meaning the tool will still be subject to the GDPR. In the UK, the ICO has indicated that it will be developing a “single set of rules” on GenAI, which will provide further detail on the areas identified by the consultation and which the ICO intends to turn into a statutory code of practice. Businesses in the UK should follow the development of this code of practice and ensure their AI systems comply with it.

Spring 2025
