11/05/2025 | Press release
Today, the Commission launched work on a code of practice on the marking and labelling of AI-generated content.
Under the AI Act, content such as deepfakes and certain AI-generated text and other synthetic material must be clearly marked as such. This requirement reflects the growing difficulty in distinguishing AI-generated content from authentic, human-produced material.
The AI Act sets out transparency requirements for providers and deployers of certain AI systems, including generative and interactive AI. These rules aim to reduce the risks of misinformation, fraud, impersonation, and consumer deception, and to foster trust in the information ecosystem.
Today's kick-off plenary meeting, bringing together the independent experts appointed by the European AI Office, marks the beginning of an inclusive, seven-month, stakeholder-driven process to draft the code. Independent experts will lead the process, using input from the public consultation and stakeholders selected through an open call.
The upcoming code of practice on transparency of AI-generated content will be a voluntary instrument to help providers of generative AI systems effectively meet their transparency obligations. It will support the marking of AI-generated content, including synthetic audio, images, video and text, in machine-readable formats to enable detection. The Code will also assist deployers using deepfakes or AI-generated content in clearly disclosing AI involvement, particularly when informing the public on matters of public interest.
These obligations will become applicable in August 2026, complementing existing rules, such as those on high-risk AI systems and general-purpose AI models.
Read more about the Code of Practice on transparency of AI-generated content.