This weekend, the EU’s General-Purpose AI (GPAI) Code of Practice will be assessed by the AI Board and AI Office to determine whether it meets the requirements of the AI Act. The Code is a voluntary framework developed with the help of nearly 1,000 stakeholders, including Partnership on AI (PAI), model developers, AI safety experts, academics, representatives from EU Member States, and civil society organizations. It sets out measures that developers and providers of general-purpose AI can use to demonstrate compliance with the EU AI Act and to protect users of these systems from potential harms. As AI evolves, it is important to us that we foster the development and deployment of systems that contribute to a more just, equitable, and prosperous world.
The GPAI models covered by the Code include powerful large language models like ChatGPT, Claude, and Llama, as well as other foundation models that can be adapted to a range of tasks. Compliance with the Code will require providers of all GPAI models to supply documentation about their models to the AI Office and to downstream developers, and will require providers of the most powerful GPAI models to take steps to ensure their models are safe, including conducting evaluations, assessing and mitigating risks, reporting incidents, and putting adequate cybersecurity measures in place. The Code has three sections: Transparency and Copyright, which address all GPAI models, and Safety and Security, which addresses only GPAI models with systemic risk.
“As governments across the world work towards developing comprehensive AI governance strategies, it is important for frameworks like the Code of Practice to pave the way for clear guidance and foster responsible innovation.”
Since the drafting process began last September, PAI has contributed significantly to the development of the Code, joining plenary sessions and contributing to all four working groups. We provided written feedback on multiple drafts, addressing the Transparency and the Safety and Security sections of the Code and drawing on our published work on those topics. Most of our recommendations are reflected in the finalized Code, and we applaud the degree to which stakeholder feedback has been incorporated at each phase of the drafting process, improving the ability of the Code to promote safety, transparency, and compliance with the AI Act and to better uphold and protect the rights of EU citizens.
Transparency
The Code requires model developers to draw up model documentation and keep it up to date, and to provide relevant information to the AI Office, national AI regulators, and downstream providers. Regulators need this information to monitor compliance with the AI Act, and downstream providers need it to integrate GPAI models into their own systems and to comply with their own obligations.
PAI has undertaken extensive work on the importance of documentation for AI models and systems, including our ABOUT ML workstream, our Model Deployment Guidance, and our 2025 Progress Report on post-deployment documentation. We welcome the focus on documentation and the inclusion in the Code of a template Model Documentation Form.
To date, there has been no consensus on either the form or the content of documentation artifacts. Yet the benefits of documentation are greatest when it is comparable across models and systems, making it easier to judge the relative performance, suitability, or impact of different models, as the sketch below illustrates.
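To make the comparability point concrete, here is a minimal sketch, in Python, of what a standardized documentation record could look like. The schema and field names are our own illustration, not the actual Model Documentation Form from the Code; the point is simply that a shared structure lets records from different providers be compared field by field.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical documentation schema, for illustration only.

    These fields are our own examples, not the Code's Model
    Documentation Form; a shared schema is what makes records
    comparable across providers.
    """
    model_name: str
    provider: str
    release_date: str                     # e.g. "2025-07-01"
    modalities: list[str]                 # e.g. ["text", "image"]
    intended_uses: list[str]
    known_limitations: list[str]
    evaluation_results: dict[str, float] = field(default_factory=dict)

def compare_on(a: ModelDocumentation, b: ModelDocumentation, benchmark: str) -> str:
    """Compare two records on a shared benchmark field."""
    score_a = a.evaluation_results.get(benchmark)
    score_b = b.evaluation_results.get(benchmark)
    if score_a is None or score_b is None:
        return f"{benchmark}: not reported by both providers"
    leader = a.model_name if score_a >= score_b else b.model_name
    return f"{benchmark}: {leader} reports the higher score"
```

Because every provider fills in the same fields, regulators and downstream developers can line records up side by side without bespoke parsing for each model.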
Standardization of model documentation is also a key foundation for interoperability between legal and policy frameworks for foundation models, which we discuss at length in our report on the topic. The Model Documentation Form has the potential to promote policy interoperability, and PAI urges the Code’s drafters to consider promoting harmonization with evolving international best practices in future iterations of the Code.
The Code requires disclosure of information to EU regulators and downstream providers, and it encourages signatories to consider what information can be publicly disclosed. In subsequent versions of the Code, PAI would like to see additional guidance about what information about models should be publicly released. Increased transparency will promote independent evaluations of models and ultimately increase the safety of, and public trust in, deployed AI models.
“. . . our recommendations are reflected in the finalized Code . . . [improving] the ability of the Code to promote safety, transparency, and compliance with the AI Act and to better uphold and protect the rights of EU citizens.”
Safety and Security for GPAI models with systemic risk
PAI has undertaken significant work on foundation model safety. In particular, our Model Deployment Guidance contains detailed safety guidance for developers that is tailored to model capabilities and release strategy.
PAI is pleased to see that feedback on previous versions of the Code was taken on board by the drafters, and the Code now addresses a wider variety of systemic risks, including risks to fundamental rights, consistent with the AI Act.
PAI also welcomes the requirement for external evaluations of some GPAI models. Independent assessment of model capabilities and risks is crucial: it ensures that evaluations draw on the broad range of expertise they require, and it builds wider trust that evaluation outcomes are objective.
Independent evaluations are a critical plank of a vibrant AI assurance ecosystem. PAI launched a policy research project at the AI Action Summit in France earlier this year to address the core factors needed to build out an assurance ecosystem to create justified trust in AI models and systems. In future iterations of the Code, we would like to see more detailed guidance about external evaluations both pre- and post-deployment, including robust safe harbor provisions for evaluators.
As with the Transparency section of the Code, future iterations should expand the guidance on releasing summaries of Safety and Security Frameworks and Model Reports, including more detail about when those summaries should be released and what they should contain.
We also welcome the inclusion in the Code of provisions for post-market monitoring and incident reporting. Sharing relevant information about a model’s impact after deployment is crucial to understanding how to amplify societal benefits, manage and mitigate risks, develop evidence-based, proportionate policy, and advance industry-wide norms.
Looking forward
While the Code offers a strong foundation, some areas could benefit from further development, including greater guidance about identifying systemic risks and more detail about external evaluations. We urge the Commission to keep these matters under review and commit to regular updates to the Code so that it reflects evolving best practices.
Regular review of the Code will be necessary to accommodate rapidly evolving best practices, as well as increasing model capabilities and the emergence of novel risks. In future reviews of the Code, we hope to see a number of issues addressed:
Ongoing research: Research on evaluations, metrics, and benchmarks for capabilities and risks is still ongoing, and best practices for post-market monitoring are continuing to develop. As our understanding of GPAI progresses, the Code should incorporate correspondingly more detailed guidance.
Updates to the threshold for GPAI models with systemic risk: The current compute-based threshold (the AI Act presumes systemic risk above 10^25 floating-point operations of training compute) is widely acknowledged to be an imprecise proxy for risk; see the illustrative sketch after this list. Methods to identify which models require closer scrutiny are likely to evolve, and the Code should be updated to reflect this. As well as being a core part of the risk management framework in the Code, thresholds are a foundational plank of policy interoperability across jurisdictions. Future iterations of the Code should seek to harmonize the threshold for models with systemic risk as closely as possible with thresholds for frontier models in other national and international frameworks, within the constraints of the definitions and in-scope risks set out in the AI Act.
Public disclosure: Address in more detail the public disclosure of standardized model documentation and of summaries of Safety and Security Frameworks and Model Reports.
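To illustrate why a compute threshold is only a rough proxy, here is a minimal sketch, assuming the widely used 6 × parameters × tokens heuristic for estimating the training compute of dense transformer models. The heuristic and the example figures are our own assumptions, not the AI Act’s prescribed measurement method; only the 10^25 FLOP threshold comes from the Act.

```python
# Illustrative sketch only. The 6 * N * D estimate is a common heuristic
# for dense transformers (roughly 6 FLOP per parameter per training token);
# the AI Act does not prescribe this method of measurement.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # the AI Act's presumption threshold

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """Whether a model's estimated training compute crosses the threshold."""
    return estimated_training_flop(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical example: a 400B-parameter model trained on 15T tokens
# lands at 6 * 4e11 * 1.5e13 = 3.6e25 FLOP, above the threshold.
print(presumed_systemic_risk(4e11, 1.5e13))  # True
```

A compute estimate alone says nothing about a model’s capabilities or deployment context, which is why compute-only triggers will likely need to be supplemented by other identification methods over time.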
The endorsement of the GPAI Code of Practice will mark a significant step forward in global AI governance. As governments across the world work towards developing comprehensive AI governance strategies, it is important for frameworks like the Code of Practice to pave the way for clear guidance and foster responsible innovation. The Code will provide developers and model providers with a structured approach to building systems that comply with the EU AI Act, ensuring that these systems are developed and deployed responsibly.
We are especially pleased to see the commitment to multistakeholder engagement throughout the drafting process, ensuring that voices from collaborators across sectors were heard. While views differ about the precise terms of the final Code, the efforts made to respond to feedback at each phase of drafting have been impressive and set a valuable precedent for collaborative AI governance. We are excited to see the Code put into practice, and we look forward to continuing our work to ensure AI is developed and deployed responsibly for the benefit of all. To stay up to date with our work in this area, sign up for our newsletter.