EU AI Act: Draft guidance for general AIs shows the first steps for Big AI to follow

The first draft of a code of practice that will apply to providers of general purpose AI models under the EU’s AI law has been published, along with an invitation for feedback – open until November 28 – as the drafting process continues into next year, ahead of the formal compliance deadlines that kick in over the coming years.

The EU-wide law, which entered into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also directs some obligations at more powerful foundational – or general purpose – AI models (GPAIs). That is where this code of practice comes in.

Among those likely to be in the frame are OpenAI, maker of the GPT models that underpin the AI chatbot ChatGPT; Google, with its Gemini GPAIs; Meta, with Llama; Anthropic, with Claude; and others, such as France’s Mistral. They will be expected to adhere to the General-Purpose AI Code of Practice if they want to ensure compliance with the AI Act and thus avoid the risk of enforcement for non-compliance.

To be clear, the Code is intended to provide guidance for meeting the EU’s AI Act obligations. GPAI providers may choose to deviate from the best practice suggestions if they believe they can demonstrate compliance through other measures.

This first draft of the code runs to 36 pages but is likely to grow longer, perhaps significantly so, as the authors caution that it is light on detail because it is “a high-level drafting plan that outlines our guiding principles and goals for the code.”

The draft is littered with boxes that pose “open questions” that the working groups tasked with producing the code have not yet resolved. The solicited feedback – from industry and civil society – will clearly play a key role in shaping the content of specific sub-measures and key performance indicators (KPIs) that have not yet been included.

However, the document gives a sense of what is coming (in terms of expectations) for GPAI makers once the relevant compliance deadlines apply.

Transparency requirements for makers of GPAIs enter into force on August 1, 2025.

However, for the most powerful GPAIs – those the law defines as posing “systemic risk” – the expectation is that they must comply with risk assessment and mitigation requirements 36 months after entry into force (or August 1, 2027).

There is an additional caveat: the draft code has been designed under the assumption that there will be only “a small number” of GPAI makers and systemic-risk GPAIs. “Should that assumption prove incorrect, future drafts may need to be significantly modified, for example by introducing a more detailed tier system of measures aimed at primarily focusing on the models that pose the greatest systemic risks,” the authors warn.

On the transparency front, the code will set out how GPAIs must comply with the law’s information provisions, including in the area of copyrighted material.

An example here is “Sub-Measure 5.2,” which currently commits signatories to providing information about the names of all web crawlers used to develop the GPAI and their relevant robots.txt features, “including at the time of crawling.”
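The sub-measure does not prescribe a format, but to make the idea concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of how a provider might record which crawler fetched a page and what the site’s robots.txt permitted at the time of crawling. The crawler name and log layout are invented for illustration; nothing here is mandated by the draft code.

```python
# Hypothetical sketch: record, at crawl time, which crawler fetched a page and
# whether robots.txt permitted it -- the kind of information Sub-Measure 5.2
# appears to expect providers to be able to disclose.
import json
import time
from urllib import robotparser
from urllib.parse import urlparse

CRAWLER_NAME = "ExampleGPAIBot"  # hypothetical crawler user-agent string

def check_and_log(url: str, log_path: str = "crawl_log.jsonl") -> bool:
    """Check robots.txt for `url` and append an audit record to a log file."""
    origin = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = robotparser.RobotFileParser(origin + "/robots.txt")
    rp.read()  # fetch robots.txt as it exists at the time of crawling
    allowed = rp.can_fetch(CRAWLER_NAME, url)

    record = {
        "crawler": CRAWLER_NAME,
        "url": url,
        "robots_txt_url": origin + "/robots.txt",
        "allowed": allowed,
        "checked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return allowed

if __name__ == "__main__":
    print(check_and_log("https://example.com/some/article"))
```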

GPAI model makers continue to face questions over how they acquired the data used to train their models, including in lawsuits filed by rights holders who allege that AI companies unlawfully processed their copyrighted material.

Another commitment set out in the draft code requires GPAI providers to have a single point of contact and complaint handling to make it easier for rights holders to communicate complaints “directly and quickly.”

Other proposed measures related to copyright cover documentation that GPAIs will be expected to provide about the data sources used for “training, testing and validation and about authorizations to access and use protected content for the development of a general purpose AI.”

Systemic risk

The most powerful GPAIs are also subject to rules in the EU AI Act aimed at mitigating so-called “systemic risks”. These AI systems are currently defined as models trained using a total computing power of more than 10^25 FLOPs.
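For a sense of scale, a common rule of thumb (not part of the Act) approximates dense-transformer training compute as roughly 6 FLOPs per parameter per training token, which lets one sanity-check whether a planned training run would cross the 10^25 line. The sketch below is purely illustrative and uses hypothetical model sizes.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold,
# using the common approximation that dense-transformer training costs
# about 6 FLOPs per parameter per token. Not an official formula.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6.0 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# comes out at roughly 6.3e24 FLOPs -- below the 1e25 line.
print(f"{estimated_training_flops(70e9, 15e12):.2e}")  # ~6.30e+24
print(crosses_threshold(70e9, 15e12))                   # False
```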

The Code contains a list of risk types that signatories are expected to treat as systemic risks. They include:

  • Offensive cybersecurity risks (such as vulnerability discovery).
  • Chemical, biological, radiological and nuclear risk.
  • “Loss of control” (here meaning the inability to control a “powerful autonomous general purpose AI”) and automated use of models for AI R&D.
  • Persuasion and manipulation, including large-scale misinformation/disinformation that may pose risks to democratic processes or lead to a loss of trust in the media.
  • Large-scale discrimination.

This version of the code also suggests that GPAI makers may identify other types of systemic risk not explicitly listed – such as “large-scale” privacy breaches and surveillance, or uses that could pose risks to public health. One of the open questions the document poses here asks which risks should be prioritized for addition to the main taxonomy. Another is how the systemic risk taxonomy should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate imagery).

The code also aims to provide guidance on identifying key attributes that could lead to models creating systemic risks, such as “dangerous model capability” (e.g. cyber offense or “weapons acquisition or proliferation capability”) and “dangerous model propensity” (e.g. misalignment with human intentions and/or values).

Although much detail still remains to be filled in as the drafting process continues, the authors of the code write that its measures, sub-measures and KPIs should be “proportionate,” with a particular focus on being “tailored to the size and capabilities of a specific provider, particularly SMEs and start-ups with less financial resources than those at the frontier of AI development.” Attention should also be paid to “different deployment strategies (e.g. open-sourcing), where appropriate, that reflect the principle of proportionality and take into account both benefits and risks,” they add.

Many of the open questions raised by the draft relate to how specific measures should be applied to open source models.

Safety and security in the frame

Another measure in the code concerns a “Safety and Security Framework” (SSF). GPAI makers will be expected to detail their risk management policies and to “continuously and thoroughly” identify systemic risks that may arise from their GPAIs.

Here there is an interesting sub-measure on “Predict Risks.” This would commit signatories to including in their SSF “best effort estimates” of timelines for when they expect to develop a model that triggers systemic risk indicators – such as the aforementioned dangerous model capabilities and propensities. That could mean that, starting in 2027, we’ll see cutting-edge AI developers laying out timeframes for when they expect model development to cross certain risk thresholds.

Elsewhere, the draft code calls for systemic-risk GPAIs to be assessed using “best assessments” of their models’ capabilities and limitations, applying “a range of appropriate methods” to do so. Listed examples include: question-and-answer sets, benchmarks, red-teaming and other adversarial testing methods, human uplift studies, model organisms, simulations, and proxy evaluations for classified materials.

Another sub-measure, on “notification of significant systemic risks,” would oblige signatories to notify the AI Office, a supervisory and governance body established under the Act, “if they have strong reasons to believe that significant systemic risks may materialize.”

The code also sets out measures for “serious incident reporting.”

“Signatories undertake to identify and track serious incidents, to the extent they arise from their general-purpose AI models with systemic risk, and to document and report, without undue delay, all relevant information and possible corrective actions to the AI Office and, as appropriate, to national competent authorities,” it says – although an accompanying open question asks for input on “what constitutes a serious incident.” So there seems to be more work to do here in nailing down definitions.

The draft code includes additional questions about “possible corrective actions” that can be taken in response to serious incidents. It also asks “what serious incident response processes are appropriate for open weight or open source vendors?”, among other wording seeking feedback.

“This first draft of the code is the result of a preliminary review of existing best practices by the four specialist working groups, a stakeholder consultation that drew almost 430 submissions, responses from provider workshops, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration, and outputs from relevant authorities and standards bodies), and, most importantly, the AI Act itself,” the authors write in conclusion.

“We emphasize that this is only a first draft and accordingly the proposals in the draft code are preliminary and subject to change,” they add. “Therefore, we invite your constructive input as we further develop and update the content of the code and work towards a more detailed final form by May 1, 2025.”