AI in financial services – regulators' response 'too slow' and impact on retail arrangements subject to new review
In yet another busy month for AI in financial services, January saw the publication of a Treasury Committee report critical of the speed of financial regulators' response to AI-related risks, as well as the launch of a new FCA review (the 'Mills Review') into the impact of AI on retail financial services.
Treasury Committee Report
Like similar reports before it, the Treasury Committee report observes how mainstream AI has become in UK financial services, with three-quarters of firms using it and insurers and international banks leading the way.
That said, the Committee's view is that financial regulators are not acting quickly enough in their response to the risks posed by AI, and that a 'wait-and-see' approach leaves consumers and markets exposed.
Key observations from the report were as follows:
- AI risks causing opaque decisions in credit and insurance, with a potential lack of accountability where AI goes wrong
- A lack of explainability is potentially particularly problematic for senior managers under the Senior Managers and Certification Regime (SMCR), where the requirement to evidence understanding and control is difficult to meet in this context and could be holding back responsible deployment
- Vulnerable customers are at risk of exclusion from hyper-personalised product design
- Systemic risks are growing as use of the technology increases, with dependency on a small number of US tech firms for AI (and cloud compute) creating concentration risk. The impact of the AWS outage in autumn last year is cited as an example of this
- Unregulated financial advice from AI chatbots is a concern in terms of misleading consumers and misinformation
- The risk of cyber-attacks is also on the rise, both in volume and scale.
So, what are the recommendations?
- Enhance stress-testing to encompass specific AI risks, as part of or alongside cyber and operational resilience tests
- Make use of the 'Critical Third Parties' regime to designate and regulate critical third parties such as the major cloud and AI solution providers. To date, the UK has yet to confirm which vendors will appear on its list, putting it some way behind the European supervisory authorities, which published their list of 19 designated third parties in November 2025
- Provide greater practical clarity, alongside existing regimes such as the Consumer Duty, particularly in respect of accountability when AI goes wrong.
On the ground, practical steps that businesses can be taking include the following:
- Map AI use cases against customer outcomes
- Review and update model governance to reflect the use of AI
- Consider risk controls and how they can be evidenced
- Test 'severe but plausible' scenarios
- Build contractual and operational safeguards for cloud and AI services, aligned with operational resilience requirements
- Engage with the FCA's testing initiatives where relevant, such as its Digital Sandbox and AI live testing as part of the FCA's AI Lab.
The Mills Review
Hot on the heels of the Treasury Committee report into AI in financial services, the FCA launched its 'Mills Review' to consider how AI will reshape retail financial services.
The review seeks views by 24 February on the following four themes:
- How AI could evolve in the future
- How that could affect markets and firms
- The impact on consumers
- How financial regulators may need to evolve.
This is familiar territory – these key questions have been around for some time – but there is no suggestion of any move away from the current 'outcomes-based / tech-neutral' regulatory approach. Recommendations from the review are due to be reported to the FCA Board in the summer, so watch this space.
Please get in touch with any of RPC's multi-specialist AI team if you need any help on your AI journey.