
Talks on the EU AI Act Code of Practice at a Crucial Phase

Commentary by Laura Caroli

Published January 24, 2025

Many outside Europe have stopped paying attention to the EU AI Act, deeming it a done deal. This is a terrible mistake. The real fight is happening right now, with the drafting of the Code of Practice proposed by the act to detail the rules for general-purpose AI (GPAI) models, including those with systemic risk. The regulation itself sets out only very high-level provisions in this regard, delegating the details to this code, a voluntary document containing concrete commitments to comply with the requirements set out in the law. Participating stakeholders were asked to submit comments on the second draft of the code by January 15. The AI Act set a deadline for the code to be finalized by the end of April 2025, to enable providers to prepare for compliance by August 2025. This allows for one more round of drafting and comments before the text is closed.

The first draft was a rough first attempt at setting out the operational steps in the still-nascent field of AI safety for frontier models, and it was filled with open questions meant to engage the various stakeholders participating in the process. The second version is twice as long and much more detailed. It includes specific key performance indicators for each measure, provides much clearer explanations of terms and concepts, and attempts to resolve some of the tensions that were becoming clear among the participating stakeholders.

The Factions at Play

Who are the factions at play in the process? Participating civil society organizations can be roughly divided into three groups:

  • Effective altruists or long-termists, advocating for strong AI safety measures for frontier AI models and cautioning about AI's existential risks.
  • Traditional civil society organizations, such as consumer or human rights-focused organizations, which are more concerned with privacy and respect for fundamental rights in the shorter term.
  • Rightsholder organizations, whose explicit focus is only to make sure that provisions on transparency about copyrighted content are as strong and effective as possible, so as to enable rightsholders to enforce their rights.

In contrast to civil society, industry experts (representing roughly 20 percent of all participants) can be divided into:

  • Model providers with systemic risk
  • Other model providers
  • Model deployers or downstream providers (typically small and medium enterprises)
  • The open-source community (which partially overlaps with the above categories)

As expected, civil society organizations are actively trying to make obligations stricter and more detailed, while industry is pushing back against them, deeming them unfeasible or even beyond the scope of the AI Act that the code is supposed to help implement.

Main Differences Between the Two Drafts

The second draft maintains the overall structure of the first, reflecting the requirements of the AI Act, with one main exception: the Safety and Security Report introduced in the first draft is now merged with the Safety and Security Framework (SSF), to underscore its utility in keeping track of internal compliance and in providing transparency to the AI Office, the main enforcer of the GPAI rules. Another notable difference is that the section on taxonomy is now integrated into the commitments of providers of GPAI models with systemic risk and broken down into its main components in greater detail than in the previous draft.

Also, while safety and security mitigations previously fell under the same measure, they have now been separated into two distinct parts. A very interesting addition to the security part is a direct reference to existing cybersecurity standards and best practices, since that field is much more developed and mature than AI safety. Even more interesting is the fact that a European policy document such as the Code of Practice explicitly references standards from the National Institute of Standards and Technology, a U.S. federal agency. This is a testament to the drafters' attention to international efforts and interoperability.

An Attempt to Solve Tensions Between Stakeholders: Softer on Industry . . .

The attempt to solve the tensions is clear in several parts.

First of all, the effort to appear less prescriptive and more mindful of the voluntary nature of the code is made much more evident by the substitution of the expression "signatories will," which introduced most of the measures in the first draft, with "signatories commit to."

At the same time, the draft specifies in many instances that the proposed measures entail a certain level of flexibility. For example, in the risk identification section, "providers have full flexibility in adapting to changes in best practices as those become available."

Since the field of technical risk mitigation is still developing, in the corresponding section the draft clarifies that the commitments are "outcome-based," and that mitigations merely need to be proportionate to the risks posed by the model.

In the commitment to serious incident reporting, the draft adds that "a Signatory reporting an incident to the AI Office does not constitute an admission of wrongdoing," to assuage providers' concerns about possible liability (and reputational) implications.

But probably the most notable pro-industry change is the one on external evaluators. Here, a decision tree helps clarify the instances, which are only residual, in which providers commit to using external assessors in the pre-deployment phase.

The text further makes it very explicit that a pre-market external risk assessment of a model does not amount to a "de facto preapproval" regime, as providers had feared. Several industry representatives had voiced concerns that such a mandatory third-party assessment would be too burdensome, unfeasible, and beyond the scope of the AI Act. Long-termist organizations, however, still believe it should be mandatory, given the nature of the systemic risks these model providers need to manage, and even Anthropic CEO Dario Amodei agrees.

. . . And More Prescriptive at the Same Time

Despite all these inclusions and explanations for providers, the second draft of the code also contains many more complexities for them. Compared to the first draft, the second goes into much more detail on the measures they should adopt, even specifying tight time frames for certain actions. For example, providers should reassess the adherence to and adequacy of their Safety and Security Framework every six months. By contrast, the first draft stated this should occur "annually," while the AI Act does not mention a specific time frame.

In the model elicitation measure, the draft mandates that elicitations minimize the risk of strategic underperformance of the model, specifying

"This requires providing evaluators with adequate compute budgets . . . as well as appropriate staffing and engineering budgets . . . For the most severe risks identified, Signatories commit to roughly matching the elicitation efforts spent on their leading non-safety research projects."

This may sound like a very prescriptive measure. At the same time, OpenAI CEO Sam Altman himself wrote in a blog post two years ago:

"Importantly, we think we often have to make progress on AI safety and capabilities together. It's a false dichotomy to talk about them separately."

Therefore, such a commitment should be obvious for responsible companies.

When it comes to sharing tools and best practices, it is suggested that signatories

"will consider hiring or assigning additional engineering and support staff to research teams . . . while keeping the workload of existing safety researchers the same."

This is a laudable but very intrusive staffing requirement that seems feasible, at best, only for large companies.

Incorporating More of Civil Society's Concerns

Among the elements that lean more towards civil society, in particular the organizations dealing with fundamental rights and those representing rightsholders interested in copyright enforcement, the transparency section saw a major expansion. The AI Act mandates that all GPAI model providers supply detailed technical documentation to the authorities and a different subset of information to downstream providers. The former requirement should help the authorities (mainly the EU AI Office) verify compliance with the rules, test the models, and request measures in case of problems; the latter information helps downstream providers integrate the models into their AI systems. The regulation already contains a detailed list of what the documentation should include. The first draft of the code elaborated only slightly on it, but the second draft significantly expands one section in particular: the information on datasets to be submitted to the authorities. Comparing the different levels of detail in the three texts is quite striking:

[Visualization comparing the level of detail on training-dataset information required by the AI Act, the first draft, and the second draft of the code]

To be fair, most of the added elements in the second draft are mere elaborations of concepts that were already present in the text of the act. However, the draft now also includes, among others, data about "[organizations] that manage humans to create, pre-process, [and] annotate data" and the numbers thereof, as well as how the provider acquired the rights to the data. The latter is a clear nudge towards rightsholders, forcing providers to disclose whether or not they obtained datasets containing copyrighted content in a lawful way.

Even more striking is the addition of a description of methods to address "the prevalence of child sexual abuse material (CSAM) or non-consensual intimate imagery," copyrighted materials, and personal data in the datasets. The fact that these are the singled-out categories to keep in check (and not, for example, the prevalence of violent imagery or terrorist content) says a lot about the sensitivities of the stakeholders and the drafters.

Some Recommendations as the Drafting Reaches Its Third and Crucial Round

Add one missing element and reformulate one potentially problematic proposal. The AI Act mandates that all GPAI model providers publicly disclose a "sufficiently detailed summary about the content used for training of the general-purpose AI model," through a template that the EU AI Office will provide. The regulation specifies that the code should also cover the "adequate level of detail for the summary about the content used for training." Given the level of attention paid by both rightsholders and industry to the copyright implications of GPAI model training, this will be a crucial point for ensuring the code has true added value. Yet it seems to be missing from the draft. The EU AI Office has started outlining what the future template will look like, but the balance on the level of detail should be struck by the participants in the code process.

By contrast, an element that should not be in the code can be found in measure 10.5, on "models as part of systems." The second draft asks providers to ensure the downstream provider evaluates the model with the same standard of rigor as the upstream one. The regulation only foresees an information-sharing role along the value chain for downstream providers, as is already common practice. This new measure therefore looks like a de facto new obligation for downstream providers, a particularly problematic addition that, if not further clarified, goes beyond the letter and the intentions of the AI Act.

Continue streamlining and clarifying operational steps together with industry. The drafters have clearly made an effort to provide explanations as to what certain measures mean and why they are there, but more still needs to be done.

For example, internal timelines can be helpful, but they are not explicitly foreseen in the act. Therefore, not only should they be streamlined, it should also be made clear that they are meant to help industry adapt more easily to a common time frame, not to act as requirements that can trigger fines.

Also, the proposed potential outline of an SSF could be further detailed and turned into a checklist, similar to existing risk management standards, containing all the key operational steps that are described in the measures.

Finally, it could be useful to clarify the exact documentation a provider needs to prepare. The second draft still contains duplications that can be easily corrected and streamlined.

A Final Word of Caution to the Drafters

The co-regulatory effort of the Code of Practice provides the unique opportunity to draft the rules together with industry, to make sure the commitments work for all operators.

The process has now reached a crucial phase. Providers should keep engaging constructively, highlighting what works and what doesn't, and taking responsibility for their power.

At the same time, excluding providers, not taking their concerns into consideration, or imposing obligations on them without considering their feasibility and alignment with the law is equally dangerous and risks undermining the overall process.

The upcoming third round of drafting will show whether the process will ultimately succeed or fail. Recent political developments in the United States must also be taken into consideration. The turbulent start of the Trump administration and its initial actions on AI, the strict AI diffusion rule put forward at the last moment by the outgoing administration, and the newly anti-European stance adopted by big tech companies aligning themselves with the new president do not augur a smooth conclusion of the process.

Europe must absolutely not renounce setting its own rules for models made available in its single market. Nevertheless, drafters must keep in mind the fragility of the current setting and the clear imbalance among the participants in the platform, whereby civil society and independent experts substantially outnumber industry. If the overall process happens "against" industry instead of being co-led, owned, and shepherded by it, the code will have no legitimacy and will remain a void and purposeless exercise. It must also be recalled that the AI Act does not mandate that companies adopt the code as a means of compliance: the code remains voluntary, and providers retain the possibility to demonstrate compliance through other means.

At the same time, the act foresees the possibility for the AI Office to set the rules itself if a code cannot be finalized by August 2, 2025, when the rules for GPAI models start applying. These two elements should be enough to show how fragile the balance is: This process can either be an enormous success of participative, co-regulatory rule-setting that could set an example for other fields, or it could fail, dealing a major blow to the overall credibility of the AI Act and, ultimately, of the European Union itself.

The participants should keep in mind the initial inspiration for the code, which lies in the standardization process, even if it is taking place under accelerated and exceptional conditions. It should remain a shared process, where consensus is actively sought out as a necessary component of the legitimacy of the rules, and where industry is a responsible protagonist rather than a reluctant rule-taker. This is particularly important considering that the rules on GPAI models will be the first to apply (except for the prohibitions and the provisions on AI literacy, which kick in in February 2025). This means they will set the tone for the overall application of the European AI rules.

Guaranteeing the safety of frontier AI is a goal shared by all key global leaders. A successful code is likely to set the standard for future AI safety rules worldwide. All the more reason to get it right.

Laura Caroli is the senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2025 by the Center for Strategic and International Studies. All rights reserved.
