UK Government sets out proposals for regulation of AI
The question
What are the UK Government’s plans for the future regulation of artificial intelligence (AI)?
The key takeaway
The Government has set out its proposals for AI regulation, aiming to establish a clear set of principles for sector regulators to interpret, whilst also ensuring that the general public is protected from the risks that AI may pose.
The background
In July, the Government set out new plans for the regulation of AI as part of its ongoing strategy to encourage responsible innovation in emerging technologies. This follows other initiatives in this area, such as the National AI Strategy, published last year, and the Data Protection and Digital Information Bill (Bill). The Bill aims to strengthen the UK’s data laws and provide greater protection for the general public’s privacy and personal data, whilst also reducing the administrative burden on businesses.
In its recent publication, the Government acknowledges the clear benefits of AI for the economy and society, ranging from the use of AI in healthcare to track cancer tumours, to its use in sustainability efforts across various sectors. However, it also makes clear that there are new and significant risks with using AI that need to be addressed as this technology develops.
The development
To summarise, the proposals aim to create a framework that is:
- context-specific: regulation will be tailored to how AI is used in a particular sector and led by the sector regulators who best understand that context
- pro-innovation and risk-based: the focus will be on areas of AI that are high-risk rather than hypothetical or low-risk to encourage businesses to innovate
- coherent: although regulators will need to interpret and implement the principles set out in the proposals, the aim is to make the framework as clear and as easy to navigate as possible
- proportionate and adaptable: the principles will initially be non-statutory, preserving flexibility in the Government’s approach, and regulators will be encouraged to use guidance and voluntary measures in the first instance.
There are six new, cross-sectoral principles that existing regulators will need to interpret and implement for AI regulation. These are as follows:
- Ensure that AI is used safely
The publication notes that whilst the risk of an impact on safety is more obvious in some sectors, such as healthcare and critical infrastructure, AI creates the potential for unanticipated safety risks to emerge in any sector. Regulators across all sectors will therefore need to take a “context-based approach” when identifying and managing potential risks.
- Ensure that AI is technically secure and functions as designed
AI involves machines learning how to complete tasks and processes from data. Regulators will need to ensure that the functioning, resilience and security of business systems using AI are tested and proven, and that the data used is reliable and representative, to give the general public confidence that the technology can be trusted.
- Make sure that AI is appropriately transparent and explainable
The Government acknowledges that it is often difficult to explain how AI systems work at a technical level, and that in most cases this lack of explainability does not pose a significant risk. However, in certain circumstances, such as legal settings where there is a right to challenge a decision, the public and businesses may demand transparency to understand how AI decisions are made. Regulators may need to set appropriate transparency requirements covering how AI decisions are reached, the data used in AI systems and where accountability lies.
- Consider fairness
Regulators will have the flexibility to define fairness for their own sectors, and will be expected to decide where fairness is relevant and to create, implement and enforce governance requirements accordingly.
- Define legal persons’ responsibility for AI governance
It is important from a legal perspective that accountability for outcomes produced by AI rests with an identifiable legal person, whether a business or an individual. Regulators will need to ensure that those responsible can be easily identified and held accountable.
- Clarify routes to redress or contestability
Regulators will need to consider how individuals and businesses can contest a decision or outcome produced using AI. Although AI systems can deliver high-quality outcomes, the risk of bias and of flaws in training data should not be overlooked.
The Government plans to set out its position in a White Paper, due to be released later this year, and invited stakeholders to provide their views on the proposals. The 10-week call for evidence has now closed, and the feedback provided by contributors working across AI will help shape the next steps.
Why is this important?
The proposals mark a significant departure from the centralised approach taken by the EU, whose proposed Artificial Intelligence Act would establish a European Artificial Intelligence Board as the overarching body responsible for AI governance. The UK approach will instead give existing regulators across a range of sectors the power to determine their own regulatory responses to AI, although they will still be required to act in a manner consistent with the general principles.
Any practical tips?
Consider the six core principles and identify any areas of your existing AI systems that may attract regulatory scrutiny. When developing new ways to use AI in your business, keep the principles in mind and adapt your approach accordingly. Finally, look out for the upcoming White Paper on AI, due to be published in late 2022, which will set out further details of the Government’s plans. An early understanding of where the regulatory focus will lie could prove invaluable when developing long-term AI solutions.
Autumn 2022