12/04/2025 | Press release
The report 'Assessing high-risk artificial intelligence' examines the development and use of AI in five areas defined as high-risk under the AI Act: asylum, education, employment, law enforcement and public benefits. This includes AI used in job applications and exams, in determining eligibility for disability pensions, and in assessing children's reading ability, among other applications. These systems must be trustworthy, as they can shape important decisions that affect people's daily lives. Building that trust means carefully considering their impact on fundamental rights.
FRA's findings show that AI providers are generally aware of the risks their systems pose to privacy, data protection and non-discrimination. But many do not consider the wider implications for fundamental rights. For example, providers of AI systems used to assess children's reading abilities often fail to assess how their system may affect the right to education.
Approaches to mitigating the risks arising from AI use are fragmented, and their effectiveness is unproven. For example, many providers focus on human oversight as a mitigation measure, but this is not a blanket solution. People overseeing an AI system may over-rely on its outputs or fail to spot mistakes. Alongside human oversight, other mitigation measures are needed.
Based on in-depth interviews with those working with high-risk AI, the report highlights challenges in defining what qualifies as an AI system and determining when such systems should be classified as high-risk under the AI Act. For example, organisations can apply a so-called 'filter' to exclude systems from the high-risk category if they perform simple or preparatory tasks like file handling. But such systems can still affect people's rights if they produce inaccurate results. If interpreted too broadly, this filter could introduce loopholes and limit the protection of fundamental rights in practice.
To address these issues and ensure a common understanding of high-risk AI systems, FRA sets out a series of proposals in the report.
Quote from FRA Director Sirpa Rautio:
"Assessing fundamental rights risks is good practice and good for business. It helps build trustworthy AI which works for people. It creates legal certainty for businesses. And it helps to deliver on the EU's commitment to innovate and remain competitive without compromising its own values and standards. Simplifying regulation is welcome, but not at the expense of fundamental rights, particularly in high-risk areas."
The interviews for this report, and its contents, were finalised before the European Commission issued the Digital Omnibus proposal on 19 November 2025. The report's findings therefore do not directly address that proposal.
The report is based on interviews with providers and deployers of AI systems and AI experts in Germany, Ireland, the Netherlands, Spain and Sweden. Insights from focus groups and interviews with rightsholders complement the analysis.
It builds on FRA's previous reports on artificial intelligence and fundamental rights and on bias in algorithms, which highlight how AI affects all fundamental rights, not just privacy or data protection.
For more information, please contact: [email protected] / Tel.: +43 1 580 30 642