FRA - European Union Agency for Fundamental Rights

04/20/2026 | Press release | Distributed by Public on 04/21/2026 01:17

Privacy Symposium Conference 2026

Speech
20 April 2026

Speaker
Sirpa Rautio
FRA Director Sirpa Rautio joins the opening session of the Privacy Symposium 2026 to deliver a keynote address on AI governance. The speech takes place on 20 April in Venice. The session covers the practicalities of governing autonomous systems, the challenge of aligning international standards, and how to ensure that AI regulation supports innovation while protecting fundamental public interests.

CHECK AGAINST DELIVERY

Dear Mr Ziegler, dear colleagues,

I am delighted to be here today, on the opening day of the Privacy Symposium, which my agency has had the pleasure of engaging closely with over the last few years. I would like to commend the organisers for bringing together this international panel, because I think we can all agree that this task - the governance of artificial intelligence - is one that can only be truly successful if we have cross-border, coordinated cooperation.

The EU Agency for Fundamental Rights, or FRA, as we are known, is the European Union's independent human rights agency. Through our expertise, evidence and data, we provide advice on fundamental rights to EU institutions and Member States as they implement EU law.

Regulating new technologies to protect fundamental rights has many positive consequences, including enhancing the quality of products and increasing consumers' trust. With these positive consequences in mind, we should dispel the notion that regulation is a burden.

In the EU, fundamental rights compliance is not optional - it is a legal obligation. Respect for fundamental rights, alongside democracy, the rule of law, dignity, equality and freedom, is found in Article 2 of the Treaty on European Union, and further cemented in the Charter of Fundamental Rights - an instrument to which every Member State is a party by virtue of its membership of the European Union. There can be no derogations from, nor reservations to, the rights contained in the Charter, and every EU law, policy and activity must comply with them.

Given my inclusion in this particular panel, it is only natural that I will focus on the EU's regulatory framework. I will outline the role FRA plays within this framework, including through our extensive research in this field, on topics such as assessing high-risk AI and digitalising justice. This is alongside FRA research on the use of AI in remote biometric identification for law enforcement purposes, and on AI uses in supporting decisions in asylum and immigration procedures.

The EU Charter of Fundamental Rights provides the foundation for the fundamental rights protection framework in the EU, and it remains the overarching fundamental rights roadmap for the development, use and regulation of AI. This is reflected in the AI Act, which explicitly states that one of its aims is to ensure a high level of protection of the fundamental rights already enshrined in the Charter.

To this end, the AI Act provides for a number of specific safeguards that support this protection. In this regard, I would like to highlight one key aspect: that is an assessment of AI's impact on fundamental rights. Why is such an assessment important?

Consider the example of an AI recruitment system that turns out to be biased and prefers male candidates over female ones. Consider an AI system used to determine social benefits that turns out to be so opaque that people cannot understand how a decision was reached, or how to challenge it. Or consider an AI system that is used to identify people but collects more data than needed, or categorises people by ethnic origin for further processing, thus resulting in unjustified interference with their privacy rights.

These examples illustrate the critical need for fundamental rights impact assessments. They are designed to help providers and deployers identify the risks of using a particular system, to mitigate those risks, and to improve their products and systems, thereby making a system more trustworthy.

Under the AI Act, both providers and certain deployers of high-risk AI systems will have to carry out such assessments in the future. FRA's research has shown that significant tailored guidance will be required for those carrying out such assessments. This is because companies and administrations rarely have the expertise to effectively assess fundamental rights, even if they have the will to do so. Assessments need to address the main cross-cutting fundamental rights concerns, namely the impact on privacy and data protection, but also on equality and non-discrimination, and on access to effective remedies. However, as we all know, AI can affect virtually every human right, depending on the context of use. Appropriate guidance and expertise can support the focused analysis required to identify which rights might be impacted in a given case.

More guidance is also needed on prevention and mitigation measures against rights infringements - our research has shown how current approaches are often fragmented and fall short. We see a strong reliance on human review as the main measure to limit risks. Its effectiveness depends, however, on how it is designed and applied. If a system is inherently flawed - for example by exhibiting bias that can lead to discrimination - human oversight and review might not be sufficient to address such risks properly. It cannot be the sole solution to rights compliance and needs to be complemented by other measures. My Agency therefore calls for establishing a robust evidence base that allows for a better understanding of fundamental rights risks and effective mitigation practices.

What is more, in order to make these assessments comprehensive, we need legal, technical and social experts working on them. Providers and deployers should also listen to people affected by the systems and their representative organisations to ensure the assessments are designed and implemented correctly. Engaging stakeholders and civil society at the very earliest opportunity is a crucial support in such an exercise.

FRA is currently supporting the AI Office in the development of a template for fundamental rights impact assessments under the AI Act. The agency has also recently joined the European standardisation committee that develops technical standards for the AI Act. We see the involvement of our agency as a sign of good will and commitment to the full integration of fundamental rights in the regulation of new technologies.

To move on to another point - while self-assessments are important, they can only properly function when combined with effective external oversight by independent bodies that are sufficiently resourced and possess the necessary fundamental rights expertise. A significant role is foreseen under EU law for national human rights protection structures - namely data protection authorities, equality bodies, national human rights institutions, ombuds institutions and consumer protection bodies.

The AI Act, under Article 77, empowers such bodies to play a role in the oversight of AI. Many of these institutions must carry out this new role without any increase in human or financial resources, and many of them operate within a fragmented landscape, with many actors sharing various responsibilities. I look forward to welcoming many Article 77 actors to FRA's premises in Vienna in the autumn, where we will discuss these impending challenges and share innovations on how to carry out this new role.

To conclude, I would like to stress the importance of international cooperation. AI and its regulation are global phenomena. We must learn from each other's experiences. In this regard, it is wonderful to hear about Taiwan's, Canada's and Spain's efforts. I am curious to learn more about the Parliament's work on the Digital Omnibus, and I welcome the Council of Europe's AI Treaty and the work of the committee on new and emerging technologies, in respect of which FRA is happy to continue cooperating.

The AI Act - together with other EU law, such as data protection, equality and consumer protection legislation - offers a great opportunity to make better AI that benefits everyone. The benefits come from providing legal certainty and guidance to developers and users of such technologies on the one hand, and from setting up governance and oversight structures, or further building on existing ones, on the other. At FRA, we will continue to use our evidence and expertise to promote these benefits, and to show that well-established, tested human rights norms and standards can withstand - and enhance - technological innovation. I wish all participants of the Privacy Symposium productive discussions on how the regulation of new technologies can support the protection of fundamental rights and, as a consequence, a higher standard of living in the EU and beyond.

Thank you.
