FRA - European Union Agency for Fundamental Rights

FRA report calls for effective fundamental rights assessment of high-risk AI

Press Release
04 December 2025


Artificial intelligence (AI) brings many opportunities for our societies and economies. But it also puts fundamental rights at risk, especially in sensitive areas like recruitment or social benefits. Drawing on interviews with providers, deployers and experts, a new report from the EU Agency for Fundamental Rights (FRA) reveals that many of those working with high-risk AI systems do not know how to systematically assess or mitigate these risks. Strengthening awareness and rights-compliant implementation of the AI Act is key to protecting people's rights while enabling innovation and creating legal certainty for businesses.

The report 'Assessing high-risk artificial intelligence' examines the development and use of AI in five areas defined as high-risk under the AI Act: asylum, education, employment, law enforcement and public benefits. This includes AI use in job applications and exams, determining eligibility for disability pensions, assessing a child's reading ability, and other applications. These systems must be trustworthy as they can shape important decisions that affect people's daily lives. Building that trust means carefully considering their impact on fundamental rights.

FRA's findings show that AI providers are generally aware of the risks their systems pose to privacy, data protection and non-discrimination. But many do not consider the wider implications for fundamental rights. For example, providers of AI systems used to assess children's reading abilities often fail to assess how their system may affect the right to education.

Approaches to mitigating the risks arising from AI use are fragmented, and their effectiveness is unproven. For example, many providers focus on human oversight as a mitigation measure, but this is not a blanket solution. People overseeing an AI system may over-rely on its outputs or fail to spot mistakes. Alongside human oversight, other mitigation measures are needed.

Based on in-depth interviews with those working with high-risk AI, the report highlights challenges in defining what qualifies as an AI system and determining when such systems should be classified as high-risk under the AI Act. For example, organisations can apply a so-called 'filter' to exclude systems from the high-risk category if they perform simple or preparatory tasks like file handling. But such systems can still affect people's rights if they produce inaccurate results. If interpreted too broadly, this filter could introduce loopholes and limit the protection of fundamental rights in practice.

To address these issues and ensure a common understanding of high-risk AI systems, FRA proposes to:

  • Interpret the definition of AI broadly: Even less complex systems can contain biases, lead to discrimination or otherwise affect fundamental rights. Therefore, the definition of an AI system should be interpreted broadly to ensure effective fundamental rights protection and increase certainty about the AI Act's application.
  • Avoid 'filter' loopholes: The AI Act's filter for high-risk systems must be applied clearly and narrowly to prevent loopholes that could undermine fundamental rights protection across the EU. The European Commission and national authorities should closely monitor how the filter is used. If needed, the Commission should revise the rules to prevent misuse.
  • Provide clear guidance for fundamental rights impact assessments: Clear and consistent guidance is needed to ensure that risk assessments for high-risk AI systems effectively protect all fundamental rights, going beyond the rights to privacy, data protection and non-discrimination. Identifying and mitigating fundamental rights risks will promote responsible innovation and support fair competition by helping providers create better and more trustworthy AI.
  • Invest in a better understanding of risks and mitigation measures: The European Commission and EU countries should invest in studies and testing of AI systems, particularly in high-risk areas, for a better understanding of fundamental rights risks and effective mitigation practices. This would make the AI Act easier to implement.
  • Ensure proper oversight: Self-assessments matter, but they only work when backed by independent oversight from well-resourced bodies with expertise in fundamental rights. The EU and its Member States should ensure that oversight authorities have sufficient funding, staff and technical support to effectively oversee the use of AI systems.

Quote from FRA Director Sirpa Rautio:

"Assessing fundamental rights risks is good practice and good for business. It helps build trustworthy AI which works for people. It creates legal certainty for businesses. And it helps to deliver on the EU's commitment to innovate and remain competitive without compromising its own values and standards. Simplifying regulation is welcome, but not at the expense of fundamental rights, particularly in high-risk areas."

Interviews for this report and its contents were finalised before the European Commission issued the Digital Omnibus proposal on 19 November 2025. The report's findings do not directly address the Digital Omnibus proposal.

The report is based on interviews with providers and deployers of AI systems and AI experts in Germany, Ireland, the Netherlands, Spain and Sweden. Insights from focus groups and interviews with rightsholders complement the analysis.

It builds on FRA's previous reports on artificial intelligence and fundamental rights, and on bias in algorithms, which highlight how AI affects all fundamental rights, not just privacy or data protection.

For more, please contact: [email protected] / Tel.: +43 1 580 30 642