04/08/2025 | Press release
The launch of ChatGPT in November 2022 marked a significant shift in the field of artificial intelligence (AI) and is changing the way businesses use AI today. While the underlying deep learning technology is not fundamentally new (so-called "deep fakes", for example, have been circulating online for several years), the new models have an unprecedented ability to process and respond to human queries and to generate content such as text, images or video.
Insurance companies in Europe have been using AI for several years. EIOPA's 2024 Digitalisation report found that 50% of non-life insurers and 24% of life insurers were already using AI in various areas of the insurance value chain, including pricing and underwriting, fraud detection and claims management.
Most of these use cases were based on "traditional" AI systems, that is, supervised machine learning, where the algorithm is trained on pre-labelled data to generate outputs such as predictions or scores. Drawing on their experience with mathematical models, insurance companies have developed robust governance and model risk management techniques around the use of these AI systems.
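To make the "traditional AI" pattern concrete, the sketch below trains a supervised classifier on pre-labelled claims data to produce fraud scores. It is a minimal illustration only: the synthetic data, feature set and model choice are assumptions for the example and do not reflect any insurer's actual pipeline.

```python
# Minimal sketch of a "traditional AI" insurance use case: a supervised
# classifier trained on pre-labelled claims data to score fraud risk.
# All data and features here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=42)

# Synthetic, pre-labelled history: each row is a past claim, each column a
# hand-engineered feature (e.g. claim amount, days since policy start,
# number of prior claims); the label marks confirmed fraud cases.
X = rng.normal(size=(5_000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5_000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train on labelled examples, then score unseen claims.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# The output is a fraud score per claim, which downstream processes and
# human reviewers can use to prioritise investigation; this is the kind of
# well-understood model that existing governance and model risk frameworks
# were built around.
scores = model.predict_proba(X_test)[:, 1]
print(f"Hold-out ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```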
However, the commoditisation of Generative AI tools has lowered the barriers to entry for AI adoption and opened the door to a wide range of new use cases in insurance, from developing customised contracts and terms and conditions, to streamlining regulatory reporting and compliance documentation, to extracting information from various unstructured data sources.
Generative AI use cases can involve varying levels of automation: the tools may serve as augmentative aids in back-office operations, requiring human input or prompts for content generation, or as semi-autonomous "Gen AI assistants" that generate complete outputs, such as a contract or a "next best action" recommendation, which still require human validation (see the sketch below). In the future we expect to see more autonomous "Agentic AI" applications deployed with little or no human intervention, for example replacing existing chatbots that have limited functionality.
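The following sketch illustrates the "semi-autonomous assistant" pattern described above, where the tool drafts a complete output but a human must validate it before release. It is a hypothetical outline: the generate_draft() function is a stand-in for whichever Gen AI service a firm actually uses, and the validation gate is shown in its simplest possible form.

```python
# Illustrative sketch of a human-in-the-loop "Gen AI assistant" workflow.
# generate_draft() is a hypothetical placeholder for a firm's Gen AI service.
from dataclasses import dataclass


@dataclass
class Draft:
    prompt: str
    content: str
    approved: bool = False


def generate_draft(prompt: str) -> Draft:
    # Placeholder: in practice this would call the deployed Gen AI model.
    return Draft(prompt=prompt, content=f"[generated text for: {prompt}]")


def human_validation_gate(draft: Draft, reviewer_approves: bool) -> Draft:
    # The tool never releases output on its own; a named reviewer signs off.
    draft.approved = reviewer_approves
    return draft


def release(draft: Draft) -> str:
    if not draft.approved:
        raise PermissionError("Output requires human validation before use.")
    return draft.content


# Example flow: draft a contract clause, route it to a reviewer, release it
# only after approval. An "agentic" deployment would remove the gate
# entirely, which is why governance expectations differ by level of automation.
draft = generate_draft("Draft a cancellation clause for a household policy.")
draft = human_validation_gate(draft, reviewer_approves=True)
print(release(draft))
```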
Rapid technological developments such as Generative AI require an agile and flexible approach from insurance companies and supervisors alike.
Insurance companies need to update their governance and risk management frameworks to take into account the unique characteristics of these new tools. For example, the adoption of Generative AI may increase their reliance on a small number of third-party service providers. In addition, commonly used data management controls and accountability techniques may need to be re-evaluated to prevent inaccurate or biased outputs, while maintaining a holistic view of all deployed systems and their collective impact.
From a regulatory perspective, it is important to address the risks posed by AI systems while ensuring that stakeholders can reap their benefits. In the EU, the AI Act takes a risk-based approach, classifying AI systems according to their risk levels. It recognises that third-party service providers share accountability for ensuring that AI systems, in particular general-purpose AI systems such as Generative AI, are deployed responsibly in Europe.
The European Commission's AI Office recently provided guidance on the definition of AI systems. Among other things, the guidance addresses mathematical optimisation methods and traditional statistical models, and clarifies that simple models performing only basic data processing fall outside the scope of the AI Act. These clarifications suggest that the impact of the AI Act on the insurance sector may be more proportionate and targeted than originally anticipated. It is also worth noting that, in insurance, the AI Act only classifies as high-risk the systems used for pricing and risk assessment in life and health insurance, while sectoral insurance legislation continues to apply whether or not these systems are considered AI.
Given that some AI systems did not exist or were not widely used when the sectoral legislation was adopted, EIOPA has recently launched a public consultation on an Opinion on AI governance and risk management. The Opinion sets out the high-level principles that supervisors expect firms to implement to ensure the responsible use of AI systems, and it emphasises risk-based and proportionality considerations. Its principles-based nature provides sufficient flexibility to adapt to new developments in this area. The Opinion explicitly excludes from its scope the AI practices prohibited and the AI systems classified as high-risk under the AI Act, while staying close to the guidance provided under the AI Act in order to maintain a harmonised approach without overlap.
As digitalisation is a global phenomenon, efforts to harness its potential and mitigate its risks need to be coordinated at the international level. EIOPA is therefore playing an active role in promoting consistency through the International Association of Insurance Supervisors (IAIS), which recently published an application paper on AI.
Thanks to Julian Arevalo for his contribution to this article.