02/02/2026 | Press release | Archived content
2.2.2026 - (11361/2025- C10-0183/2025 - 2025/0136(NLE)) - ***
Committee on the Internal Market and Consumer Protection
Committee on Civil Liberties, Justice and Home Affairs
Co-Rapporteurs: José Cepeda, Paulo Cunha
on the draft Council decision on the conclusion, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
(11361/2025 - C10-0183/2025 - 2025/0136(NLE))
(Consent)
The European Parliament,
- having regard to the draft Council decision (11361/2025),
- having regard to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS 225),
- having regard to the request for consent submitted by the Council in accordance with Article 114 and Article 218(6), second subparagraph, point (a)(v), of the Treaty on the Functioning of the European Union (C10-0183/2025),
- having regard to Rule 107(1) and (4) and Rule 117(7) of its Rules of Procedure,
- having regard to the letter from the Committee on Culture and Education,
- having regard to the recommendation of the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs (A10-0007/2026),
1. Gives its consent to the conclusion of the Convention;
2. Instructs its President to forward its position to the Council, the Commission and the governments and parliaments of the Member States and to the Secretary General of the Council of Europe.
a. Background
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the "Convention", CETS No. 225) is the first international legally binding treaty specifically devoted to artificial intelligence (AI) governance. The Convention seeks to guarantee that every stage of an AI system's lifecycle respects and protects fundamental rights, democracy, and the rule of law, ensuring that no activity undermines these values, while still fostering an environment that supports technological progress and innovation.
Work on the Convention began in 2019 with the Ad Hoc Committee on Artificial Intelligence, continued under the Committee on Artificial Intelligence in 2022, and involved 46 Council of Europe member states, numerous observer and non-member states, and 68 international civil society, academic and industry representatives.
On 18 August 2022, the European Commission issued a Recommendation for a Council Decision authorising the opening of negotiations, on behalf of the European Union, for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, pursuant to Article 218 TFEU.
In October 2022, in its Opinion 20/2022, the EDPS welcomed the Council of Europe's objective of creating the first legally binding international AI instrument, supported EU negotiations, noted its alignment with the then proposed AI Act, and stressed that the Convention should strengthen fundamental rights and provide strong safeguards for those affected by AI.
The Convention was adopted by the Council of Europe Committee of Ministers on 17 May 2024 and opened for signature in Vilnius on 5 September 2024. The Union signed the Convention following the adoption of Council Decision (EU) 2024/2218 of 28 August 2024 on the signing, on behalf of the European Union, of the Convention.
In parallel, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the "AI Act") was adopted by the Parliament and Council on 13 June 2024, published in the Official Journal on 12 July 2024 and entered into force on 2 August 2024. It establishes a comprehensive, risk-based framework for the development, placing on the market, putting into service and use of AI systems in the internal market in compliance with fundamental rights.
The Convention focuses on applying existing human-rights, democracy and rule-of-law obligations to AI activities, and was drafted taking into account a range of international instruments, including the OECD AI principles, Recommendation CM/Rec(2020)1 on the human-rights impacts of algorithmic systems, the UNESCO Recommendation on the Ethics of Artificial Intelligence and the G7 Hiroshima AI Process, including the G7 International Guiding Principles on AI and the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems.
In addition, the Convention contributes to safeguarding the free and healthy development of children, who are among the groups most exposed to the rapid expansion of AI-driven tools. This particularly relates to conversational systems and AI-powered applications targeted at minors, whose psychological, cognitive and emotional vulnerability requires robust safeguards. The implementation of the Convention in the Union reflects a strong commitment to preventing manipulative design, undue influence and unsafe AI interactions involving children, in accordance with the UN Convention on the Rights of the Child of 20 November 1989 and General Comment No. 25 of 2 March 2021 of the UN Committee on the Rights of the Child on children's rights in relation to the digital environment.
b. Implementation through EU law
The principles and obligations in the Convention are implemented in the Union by a coherent body of legislation, including in particular:
- the AI Act, which provides binding requirements on data governance, technical documentation, human oversight, robustness, cybersecurity, transparency and post-market monitoring for high-risk systems, and prohibits a series of clearly specified unacceptable practices;
- the GDPR, which ensures a high level of protection for personal data, including well-established data protection principles, the concept of data protection by design and by default as well as rights for individuals, and requires risk-based safeguards, including data breach notifications, data-protection impact assessments for high-risk processing and appropriate safeguards for international transfers of personal data;
- the Union's non-discrimination acquis, which prohibits discrimination on a wide range of grounds and is complemented by AI-Act requirements on data quality and bias mitigation in high-risk systems;
- sector-specific legislation, including Union rules on product safety, product liability and political advertising, which interact with the AI Act's risk-based framework.
One particularly important bridge is the Fundamental Rights Impact Assessment (FRIA) foreseen in the AI Act for certain high-risk deployments by public authorities and entities providing public services. This obligation directly reflects the Convention's emphasis on ex ante risk and impact management for human rights, democracy and the rule of law.
The Convention foresees risk and impact management requirements, including assessments in respect of actual and potential impacts on human rights, democracy and the rule of law, carried out in an iterative manner. In addition, the Convention requires the establishment of sufficient prevention and mitigation measures and the possibility for authorities to introduce bans or moratoria on certain applications of AI systems.
Under the Convention, activities within the lifecycle of AI systems must comply with a number of fundamental principles, including human dignity and individual autonomy, equality and non-discrimination, respect for privacy and personal data protection, transparency and oversight, accountability and responsibility, and reliability and safe innovation. As part of the remedies, procedural rights and safeguards, the Convention foresees documenting relevant information regarding AI systems and their use and making it available to affected persons, enabling them to challenge decisions made through the use of the system or based substantially on it. Furthermore, the Convention requires an effective possibility to lodge a complaint with competent authorities, together with effective procedural guarantees. Finally, the Convention requires that persons be notified that they are interacting with an AI system.
Article 22 of the Convention explicitly allows Parties to grant wider protection measures than those provided for in its provisions. In practice, the Convention operates as an international baseline, while the AI Act and the wider Union acquis already set a higher, more detailed level of protection and harmonisation within the internal market.
Moreover, the transparency requirements under the Convention are key to address emerging threats such as the malicious creation and dissemination of deepfakes, which pose serious risks to individual dignity, public trust and democratic stability. Ensuring that AI systems comply with strict ethical principles, are auditable, and remain aligned with democratic values is indispensable to protect the most vulnerable and to safeguard electoral processes and the integrity of public debate. The Union's implementation of the Convention therefore contributes to a culture of responsible innovation in which technological progress is fully compatible with societal trust, transparency and the preservation of democratic institutions.
c. Position of the Co-Rapporteurs
The Co-Rapporteurs consider the conclusion of the Framework Convention to be an important step forward in the EU's efforts to promote the safe and human-rights-compliant uptake of AI. By concluding the Convention, the EU would anchor the provisions of the AI Act and its AI policies within an internationally recognised, legally binding framework, strengthening the legitimacy and consistency of a human rights-based approach. This ensures that AI development and deployment are aligned with core European values while taking economic and market aspects into account. The Convention emphasises proactive compliance with human rights, democracy and rule-of-law principles, promoting responsible innovation.
Helping to prevent risks of human-rights violations while fostering trust in AI systems is critical for public acceptance and innovation uptake. The Convention provides a framework that balances the protection of fundamental rights with support for technological progress. Adopting the Convention would formalise the EU's commitment to safe, fundamental rights compliant and ethical AI, while ensuring responsible innovation in line with fundamental European values.
The Co-Rapporteurs underline that the Union's approach to fundamental rights in the digital sphere is anchored in the principle that everyone has the right to privacy and to the protection of their personal data, including control over how their personal data are used and with whom they are shared, and that such data must be processed in accordance with one or more legal bases under applicable data protection rules. Personal data processed for the development and operation of AI systems must therefore respect this logic.
The Co-Rapporteurs acknowledge the important role the EU plays at international level in promoting the safe and trustworthy deployment of AI and recognise the importance of fostering an innovation-friendly environment conducive to its development; the responsible use of AI, however, fundamentally depends on societal trust. While recognising the benefits AI may offer, they observe that AI may also be employed in ways that pose risks to democracy, the integrity of democratically elected institutions, freedom of expression, and the exercise of fundamental rights. The success of AI innovation and deployment is contingent upon the establishment of such trust, and AI systems will achieve higher quality when designed in full respect of fundamental rights, including data protection, as adherence to these principles enhances the reliability and purpose-specific effectiveness of AI.
This Convention represents an opportunity for Europe to project a vision of artificial intelligence fully aligned with our democratic values while enabling safe innovation: AI that reinforces human autonomy, preserves electoral processes, protects vulnerable groups, and prevents any form of technological power concentration incompatible with an open society. Misused AI can amplify hybrid threats, affect critical infrastructures, or facilitate automated cyberattacks. The Convention recognises the need for robust and proportionate measures to address these risks, in line with the Union's priorities for democratic resilience and strategic autonomy.
Given the growing impact of generative AI in the large-scale dissemination of manipulative content, the Convention provides an international basis for demanding transparency, traceability and accountability from actors developing and deploying systems capable of influencing public debate and the integrity of democratic processes.
Furthermore, the Co-Rapporteurs emphasise that artificial intelligence systems posing a high risk to humans need to be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. While the Convention includes no such explicit requirement on human oversight, the AI Act provides a clear example of how such oversight may be exercised. This is relevant, inter alia, in the context of the emergence of agentic AI and the potential risks it can pose.
The Union encourages signatories of the Convention to globalise this standard and ensure international alignment with the Union's human-centric, trustworthy approach to AI.
In conclusion, the Co-Rapporteurs find that the conclusion of this Framework Convention is a positive development that demonstrates the Union's commitment to the safe development and deployment of AI and ensures that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation. By ratifying this Convention, the Union demonstrates that it is possible to lead the technological revolution while remaining grounded in democratic values. AI must serve people, strengthen our open societies and consolidate a European model based on human dignity, transparency and accountability, while enabling safe and stable economic development. This Parliament firmly supports that path.
In light of the above, the Co-Rapporteurs recommend that Parliament endorse the draft Council Decision.
Pursuant to Article 8 of Annex I to the Rules of Procedure, the rapporteurs declare that they included in their report input on matters pertaining to the subject of the file that they received, in the preparation of the report, prior to the adoption thereof in committee, from the following interest representatives falling within the scope of the Interinstitutional Agreement on a mandatory transparency register[1], or from the following representatives of public authorities of third countries, including their diplomatic missions and embassies:
1. Interest representatives falling within the scope of the Interinstitutional Agreement on a mandatory transparency register:
- Apple Inc.
- Asociación Española de Economía Digital
2. Representatives of public authorities of third countries, including their diplomatic missions and embassies:
The list above is drawn up under the exclusive responsibility of the rapporteurs.
Where natural persons are identified in the list by their name, by their function or by both, the rapporteurs declare that they have submitted to the natural persons concerned the European Parliament's Data Protection Notice No 484 (https://www.europarl.europa.eu/data-protect/index.do), which sets out the conditions applicable to the processing of their personal data and the rights linked to that processing.
Ms Anna Cavazzini
Chair
Committee on the Internal Market and Consumer Protection
BRUSSELS
Mr Javier Zarzalejos
Chair
Committee on Civil Liberties, Justice and Home Affairs
BRUSSELS
Subject: Opinion on Conclusion, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (COM(2025)0265 - C10-0183/2025 - 2025/0136(NLE))
Dear Mr Zarzalejos,
Dear Ms Cavazzini,
Under the procedure referred to above, the Committee on Culture and Education has been asked to submit an opinion to your committees. At its meeting of 24 September 2025, the committee decided to send the opinion in the form of a letter. It considered the matter at its meeting of 2 December 2025 and adopted the opinion at that meeting[2].
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law establishes the first legally binding international framework ensuring that artificial intelligence (AI) systems are designed, developed, deployed and governed in accordance with human rights, democratic principles and the rule of law.
The Convention is a horizontal human rights and governance instrument applying across all sectors and activities within the lifecycle of AI systems. It sets overarching obligations on legality, transparency, accountability, non-discrimination, participation, access to remedies and independent oversight. These provisions are essential to maintaining an environment in which freedom of expression, media pluralism, artistic freedom and the diversity of opinions can flourish in the digital age.
In this wider context, the Convention provides a coherent horizontal framework for addressing the ethical, legal and societal implications of AI across sectors, in particular within the cultural and creative sectors and industries (CCSI) as well as education. Its legal safeguards ensure that AI deployment takes place within a framework that protects existing rights whilst promoting innovation.
It is also particularly relevant in light of earlier reflections on generative AI (GenAI), which underlined both its potential and its risks for the CCSI.
Legality, accountability and responsibility
By embedding legality, accountability and responsibility throughout the design and use of AI systems, the Convention provides a coherent framework for lawful, responsible and rights-based AI governance. These general principles could help create conditions in which cultural and creative works, educational content and journalistic expression are better protected and valued within a trustworthy digital environment.
While GenAI offers significant opportunities for innovation, it may also reduce the incentive for original creation, threaten plurality and the livelihoods of creators, and lead to the unauthorised use of protected works without transparency or remuneration. The Convention's principles of legality and accountability throughout the development and use of AI systems are therefore particularly relevant to ensuring that AI operates within clear rights-based boundaries: creators must retain full control over what is done with their cultural and creative works, and therefore their right to authorise or prohibit their use, particularly in the context of data training by GenAI systems. These provisions thus reinforce the current legal framework that ensures fairness, accountability and the effective protection of rights holders.
Equality and non-discrimination
The Convention's guarantees of equality and non-discrimination promote fairness and inclusion in the design and use of AI. By requiring parties to prevent discriminatory or biased outcomes, the Convention may help ensure that algorithmic processes are not detrimental to cultural and linguistic diversity.
Transparency, oversight, and remedies
The Convention establishes mechanisms for transparency, oversight and access throughout the AI lifecycle. These horizontal safeguards are essential for ensuring that AI systems remain open to scrutiny and that individuals and organisations have effective means of redress. The Convention's emphasis on transparency, oversight and access to remedies further reinforces the responsible use of AI, which is essential to safeguard creators' rights. It also contributes to strengthening openness and traceability: the use of cultural and creative works within AI systems must be lawful, identifiable and subject to appropriate oversight and accountability mechanisms. Addressing the lack of transparency, discoverability of original works and traceability of AI-generated content is crucial to preserve Europe's multilingual and multicultural richness and to sustain confidence in human creativity in the age of AI.
Participation, awareness and education
Finally, the Convention highlights the importance of participation and of promoting knowledge, awareness and trust throughout the AI lifecycle. These principles are consistent with the broader commitment to advance AI literacy and media literacy across education and cultural programmes, including through synergies with the current Creative Europe programme and the future AgoraEU programme. Such initiatives are essential to ensure that citizens, educators and creators can engage critically and responsibly with AI, thereby strengthening creativity, democratic participation and informed citizenship.
The Convention is therefore particularly relevant for the cultural and creative sectors, where the rapid deployment of generative AI has illustrated both the opportunities and the risks of AI. Its horizontal principles help establish the conditions under which innovation can flourish while respecting fundamental rights and creative autonomy. By joining this Convention, the European Union reinforces its global leadership in promoting trustworthy and human-centred AI. It complements the Artificial Intelligence Act (AIA) and provides the external dimension of the Union's AI policy, ensuring coherence between internal legislation and international commitments and fostering global partnerships and coalitions for responsible AI development.
For these reasons, the Committee on Culture and Education supports the conclusion of the Convention by the European Union.
Yours sincerely,
Nela Riehl
Title: Conclusion, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
References: 11361/2025 - C10-0183/2025 - 2025/0136(NLE)
Date of consultation or request for consent: 29.7.2025
Committee(s) responsible (date announced in plenary): IMCO (6.10.2025), LIBE (6.10.2025)
Committees asked for opinions (date announced in plenary): BUDG (6.10.2025), ITRE (6.10.2025), CULT (6.10.2025), JURI (6.10.2025)
Not delivering opinions (date of decision): BUDG (30.6.2025), ITRE (23.9.2025), JURI (15.7.2025)
Rapporteurs (date appointed): José Cepeda (12.9.2025), Paulo Cunha (12.9.2025)
Rule 59 - Joint committee procedure (date announced in plenary): 6.10.2025
Date adopted: 27.1.2026
Result of final vote: +: 77  -: 19  0: 8
Date tabled: 2.2.2026
77 +
PPE: Magdalena Adamowicz, Peter Agius, Pablo Arias Echeverría, Caterina Chinnici, Paulo Cunha, Henrik Dahl, Dóra Dávid, Regina Doherty, Christian Doleschal, Lena Düpont, Kamila Gasiuk-Pihowicz, Christophe Gomart, Sérgio Humberto, Arba Kokalari, Jeroen Lenaers, Lukas Mandl, Liudas Mažylis, Verena Mertens, Nadine Morano, Andreas Schwab, Tomislav Sokol, Tomas Tobé, Pekka Toveri, Dimitris Tsiodras, Inese Vaidere, Adina Vălean, Axel Voss, Marion Walsmann, Isabel Wiseler-Lima, Javier Zarzalejos, Tomáš Zdechovský
Renew: Abir Al-Sahlani, Malik Azmani, Jeannette Baljeu, Veronika Cifrová Ostrihoňová, Raquel García Hermida-Van Der Walle, Sandro Gozi, Svenja Hahn, Anna-Maja Henriksson, Fabienne Keller, Michael McNamara, Nikola Minchev, Cynthia Ní Mhurchú, Sophie Wilmès
S&D: Alex Agius Saliba, Marc Angel, Biljana Borzan, Adnan Dibrani, Bruno Gonçalves, Maria Grapini, Elisabeth Grossmann, Maria Guzenina, Pierre Jouvet, Marina Kaljurand, Pierfrancesco Maran, Matjaž Nemec, Maria Noichl, Aodhán Ó Ríordáin, Nacho Sánchez Amor, Christel Schaldemose, Birgit Sippel, Krzysztof Śmiszek, Cecilia Strada, Alessandro Zan
The Left: Giuseppe Antoci, Konstantinos Arvanitis, Damien Carême, Leila Chaibi, Özlem Demirel, Gaetano Pedulla'
Verts/ALE: Jaume Asens Llodrà, Mélissa Camara, Anna Cavazzini, Alice Kuhnke, Katrin Langensiepen, Tineke Strik, Kim Van Sparrentak

19 -
ECR: Charlie Weimers
ESN: Petr Bystron, Alexander Jungbluth, Mary Khan, Milan Uhrík, Petar Volgin
NI: Erik Kaliňák
PPE: Branko Grims
PfE: Tomasz Buczek, Elisabeth Dieringer, Marieke Ehlers, Jorge Martín Frías, Petra Steger, Pál Szekeres, Hermann Tertsch, Matthieu Valet, Tom Vandendriessche, Roberto Vannacci, Alexandre Varaut

8 0
ECR: Stefano Cavedagna, Alessandro Ciriani, Piotr Müller, Denis Nesci, Gheorghe Piperea, Reinis Pozņaks, Ivaylo Valchev
PfE: Jaroslav Bžoch
Key to symbols:
+ : in favour
- : against
0 : abstention