01/10/2025 | Press release | Distributed by Public on 01/10/2025 04:21
How can the EU safeguard fairness and fundamental rights when AI is used for decision making?
As AI systems are increasingly used to augment decision-making across high-stakes sectors like credit lending and recruitment, the EU Policy Lab's design and behavioural insights experts decided to explore the critical issue of human oversight in AI-supported decisions. With support from colleagues in the European Centre for Algorithmic Transparency (ECAT), we tried to better understand how humans interact with AI.
One might intuitively believe that human oversight can serve as a check against AI biases. However, our comprehensive study, which combines quantitative and qualitative research, tells a different story.
We discovered that human overseers are just as likely to follow advice from AI systems regardless of whether those systems are programmed for fairness or not. This suggests that human oversight alone is insufficient to prevent discrimination; in fact, it may even perpetuate it.
Our quantitative experiment involved HR and banking professionals from Italy and Germany, who made hiring and lending decisions influenced by AI recommendations. The results were striking: the use of a "fair" AI reduced gender bias, for instance, but did not eliminate the influence of pre-existing human biases in decision-making.
Qualitative insights from interviews and workshops with participants and AI experts echoed these findings. Professionals often prioritised company interests over fairness, highlighting a need for clearer guidelines on when to override AI suggestions.
Our conclusions point to the need for a shift from individual oversight to an integrated system designed to address both human and AI biases. True oversight involves more than just programming AI to be fair or relying on individual judgment; it requires a more holistic approach.
To effectively mitigate biases, decision-makers need tools and guidelines to help them understand when and how to override AI recommendations. Continuous monitoring and evaluation of AI-assisted outcomes are essential to identify and address emerging biases.
Moreover, by giving decision-makers access to data on their performance and potential biases, we can foster a more reflective and responsible approach to AI-supported decision-making.
Last year, the EU AI Act was adopted, setting a standard for AI regulation worldwide. Our findings are particularly relevant for making the AI Act operational, as they highlight the practical implications of its human oversight requirements. By providing actionable insights, we aim to inform future standards and guidelines that not only ensure compliance with the AI Act but also address the equally important considerations of practical implementation.
Dig deeper into the matter with our freshly published report, offering a more detailed account of our study and its implications for the future of AI governance.
Join us in rethinking the role of human oversight in AI. With appropriate regulations and guidelines, we can make sure that the EU can reap the benefits of new technologies while ensuring equitable and responsible decision-making in the digital era.
If you would like to find out more about this project, take a look at the methods we used: Fair decision-making: Can humans save us from biased AI? - European Commission