Generative AI EU Market Survey – key takeaways from EIOPA's report

27 February 2026. Published by Alastair Mitton, Partner, and Kristin Smith, Trainee Solicitor

Not long after the publication of a UK Treasury Committee report into AI in financial services (see our previous update here), EIOPA (the European Supervisory Authority responsible for insurance sector oversight) has published the results of its survey into the use of generative AI (GenAI), covering outlook, use cases and risk management.

The report is a particularly useful reference point as to the state of play in the sector (despite not involving the UK) as it contains a whole range of information around:

  • Market adoption and strategic importance
  • Primary use cases and focus areas
  • Automation levels and 'agentic' AI
  • Sourcing and data strategies
  • Key drivers and benefits
  • Principal risks and challenges
  • Governance, policies and risk management

This will be of interest to any teams involved in the use or governance of AI solutions within insurance businesses, particularly with a view to benchmarking where they are on their AI journey compared to others in the market, understanding how businesses are seeking to meet the challenges posed by the use of AI in a regulated setting, and considering how those ideas might be applied to their own operations.

Practical takeaways

For those short on time:

  • If you're not using (or experimenting with the use of) GenAI in one form or another, that is increasingly becoming an outlier position
  • If you don't already have an AI policy, you should be working to put one in place (around half of respondents already do)
  • Hallucinations are common and cited as the main concern, but there are steps you can take to manage this risk (see the next point)
  • Practical controls such as prompt logging create a useful audit trail and also help with challenges around explainability. So even if you can't see 'inside the box', you can compare output against input and use consistency checks to sense-check results
  • You are still your business's subject matter expert, regardless of how advanced AI solutions are becoming. What you know about your business, its regulation and its approach to governance is a key part of the mix. In fact, there is a great deal you can do with 'retrieval-augmented generation', using LLMs to do the heavy lifting while anchoring their output to the specific domain expertise you provide as a guide
  • Letting agentic AI loose outside of very closely controlled situations would be a very bold choice as compared to the rest of the market
  • DORA and, perhaps surprisingly to a lesser extent, the AI Act are seen as useful guardrails in terms of the contracting requirements to apply in supply chains.
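To illustrate the prompt-logging point above, the sketch below shows a minimal audit-trail approach in Python: each prompt/response pair is logged with a timestamp and content hashes, so outputs can later be compared against inputs and checked for consistency. This is an illustrative example only; all names, and the use of an in-memory list as the log, are hypothetical rather than drawn from the report.

```python
import hashlib
from datetime import datetime, timezone

def log_interaction(log, prompt, response, model="example-model"):
    """Append a prompt/response record to an audit log (an in-memory list here)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
        # Hashes let you later verify that records have not been altered
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    log.append(record)
    return record

def consistency_check(log, prompt):
    """Sense-check: has the same prompt ever produced materially different responses?"""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    responses = {r["response_sha256"] for r in log if r["prompt_sha256"] == key}
    return len(responses) <= 1

audit_log = []
log_interaction(audit_log, "Summarise policy X", "Policy X covers ...")
log_interaction(audit_log, "Summarise policy X", "Policy X covers ...")
print(consistency_check(audit_log, "Summarise policy X"))  # True: identical outputs
```

In practice the log would go to durable, access-controlled storage, but even this simple shape gives you the input/output comparison the report's respondents rely on.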

Summary of findings

A slightly more detailed summary of the main findings themselves is as follows:

  • Around 65% of European insurers already use GenAI, with a further c.23% planning to do so within three years (showing rapid take-up within the sector)
  • Almost 50% of respondents to the survey have a dedicated AI policy, double the 2023 level. Among GenAI users, that increases to almost 70%, with 16% expecting to develop one within 3 years
  • 64% of use cases are back-office/internal, with an internal efficiency-first approach seeming to be the trend
  • 'Human in the loop' remains a core principle, with limited appetite to let agentic AI loose (particularly in the context of customer facing workflows)
  • 'Retrieval-augmented generation' is one of the most common ways of using proprietary data as a reference point in the context of employing the power of large language models (LLMs)
  • 'Hallucinations' (plausible but inaccurate or fabricated output) are seen as the top risk – with a range of techniques applied to try to spot where that occurs
  • Governance frameworks are focussing on transparency of processes, documentation and audit trails, and managing dependencies on third-party LLMs, with prompt and output logging and consistency checks being key parts of these efforts
  • EU digital resilience frameworks are key in managing supply chain risk, with DORA seen as particularly important in maintaining sufficient oversight of critical third parties (which many of the large GenAI providers will perhaps inevitably become, if they aren't already).
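As an illustration of the retrieval-augmented generation approach mentioned above, the following is a minimal sketch: relevant proprietary documents are retrieved and placed into the prompt so the LLM answers from your own domain material rather than from memory alone. A simple keyword-overlap retriever stands in for a production vector search, and all names and documents here are hypothetical.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (a stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Compose a prompt instructing the model to answer only from retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Motor policies exclude cover for racing events.",
    "Home policies include accidental damage as standard.",
]
prompt = build_grounded_prompt("Do motor policies cover racing?", docs)
print(prompt)
```

The resulting prompt would then be sent to an LLM; because the model is told to answer only from the supplied context, this pattern helps contain the hallucination risk the survey identifies.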

Please get in touch with any of RPC's multi-specialist AI team if you need any help on your AI journey.
