11/05/2025 | News release | Distributed by Public on 11/05/2025 03:19
The chairs and vice-chairs of the Code bring expertise in AI, computer science, security, media, law, and social science, ensuring a multidisciplinary approach and a technical, legal, and societal balance.
On November 5, the AI Office will host a kick-off plenary to launch the development of the new Code of Practice on the Transparency of AI-Generated Content under the AI Act. This marks the beginning of a 7-month multi-stakeholder process to develop a robust, voluntary framework that will help deployers and providers of generative AI systems demonstrate compliance with the marking and labeling obligations set out in Article 50 of the AI Act. The objective is to address risks of deception and manipulation arising from synthetic content that is becoming increasingly hard to distinguish from authentic content, foster trust, and improve the integrity of the information ecosystem.
The code will be drawn up by two dedicated working groups, focusing on the obligations set out in Article 50 of the AI Act. Each working group will be led by independent chairs and vice-chairs, who will be responsible for drafting the code. The feedback from the multi-stakeholder consultation on transparency requirements for certain AI systems will be considered in this process. The chairs and vice-chairs will also draw on expert studies commissioned by the AI Office and input from eligible stakeholders participating in the Code of Practice process.
The chairs and vice-chairs are expected to provide strategic leadership and guidance, ensuring that discussions remain focused and productive. They will work closely with the AI Office, which will coordinate the process, and may be invited to report progress to the AI Board or the European Parliament.
The working groups will focus on the following areas:
Given their central role in shaping the Code, the chairs and vice-chairs were carefully selected based on their expertise in the transparency of AI-generated content, their independence, and their ability to fulfil their responsibilities effectively. Other relevant considerations, such as gender and geographic balance, were also taken into account. Collectively, they bring together strong technical, social science, and legal-policy competences, covering the full spectrum of issues related to generative AI transparency. Their expertise spans AI and data science, misinformation, AI marking and detection, digital forensics, algorithmic transparency, media and AI-generated content, digital governance, and constitutional law, ensuring a multidisciplinary approach that bridges technology, society, and regulation.
The kick-off plenary will provide further details on the drafting process by situating the Code of Practice within the AI Act, outlining the objectives, timeline, and ways of working with the different actors involved, and providing an overview of the multi-stakeholder consultation input received, which the chairs will use to inform the first draft of the Code.
Chair: Kalina Bontcheva leads a 20-strong research team at the School of Computer Science at the University of Sheffield and is a visiting Senior Researcher at the Big Data and Smart City Society in Bulgaria. She is also a member of Sheffield's Centre for Freedom of the Media. Kalina's research focuses on machine learning methods for detecting AI-generated misinformation and disinformation, online abuse analysis, hate speech detection, and large-scale, real-time analysis of social media content.

Vice-chair: Dino Pedreschi is a professor of computer science at the University of Pisa and a pioneering scientist in social data science, human-centred artificial intelligence, and human-AI coevolution. Dino co-leads the Knowledge Discovery & Data Mining Laboratory at Pisa and the "Human-centered AI" spoke of the Next Generation EU project FAIR - Future AI Research.

Vice-chair: Christian Riess is a Professor at Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. His expertise is in media forensics and information hiding, using techniques at the intersection of signal processing, machine learning, and security. Among other projects, he was a PI in the DARPA MediFor Program and is currently a PI in the Research Training Group 2475 "Cybercrime and Forensic Computing" funded by the German Research Foundation.

Chair: Anja Bechmann is a professor of Media Studies and Director of DATALAB - Center for Digital Social Research at Aarhus University in Denmark. Her research examines collective behavior on platforms through large-scale trace data, with a focus on challenges to democracy such as misinformation and deepfakes. She is currently PI of the Independent Research Fund Denmark project Social Media Influence, founding PI of the Nordic Observatory for Information Disorders (NORDIS), and a former member of the founding executive board of the European Digital Media Observatory (EDMO). She was a member of the EU Commission High-Level Expert Group on disinformation.

Vice-chair: Giovanni De Gregorio is the PLMJ Chair of Law and Technology at Católica Global School of Law and Católica Lisbon School of Law. His research expertise lies at the intersection of European law, constitutional law, and digital policy, with a focus on AI regulation and governance. Giovanni is the author of the monograph 'Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society' (Cambridge University Press, 2022), and he is the corresponding co-editor of 'The Oxford Handbook on Digital Constitutionalism' (OUP, forthcoming).

Vice-chair: Madalina Botan is an Associate Professor at SNSPA (Bucharest) and coordinates the Center for Media Studies. Her research covers disinformation, algorithmic propaganda, AI-driven distortions of democratic processes, and assessments of regulatory frameworks for digital platforms and AI deployers. She led the European Digital Media Observatory's evaluations of platform compliance with the Code of Practice on Disinformation and now coordinates the Bulgaria-Romania EDMO hub (BROD). She serves on the IPIE Scientific Panel on Indexing the Information Environment, coordinates research in the Horizon Europe project WHAT-IF, and sits on two EU COST Action management committees.