Partnership on AI

Shaping the EU AI Act’s Code of Practice

By Advanced AI Editor | July 31, 2025


This weekend, the EU’s General-Purpose AI (GPAI) Code of Practice will be assessed by the AI Board and AI Office to determine whether it meets the requirements of the AI Act. The Code is a voluntary framework developed with the help of nearly 1,000 stakeholders, including Partnership on AI (PAI), model developers, AI safety experts, academics, representatives from EU Member States, and civil society organizations. It sets out measures that developers and providers of general-purpose AI can use to demonstrate compliance with the EU’s AI Act and to protect users of these systems from potential harms and risks. As AI evolves, it is important to us to foster the development and deployment of systems that contribute to a more just, equitable, and prosperous world.

The GPAI covered by the Code includes powerful large language models like ChatGPT, Claude, and Llama, as well as other foundation models that can be adapted to a range of tasks. Compliance with the Code will require providers of all GPAI models to provide documentation about their models to the AI Office and to downstream developers, and will require providers of the most powerful GPAI to take steps to ensure their models are safe. This includes conducting evaluations, assessing and mitigating risks, reporting incidents, and ensuring adequate cybersecurity measures are in place. The Code has three sections: Transparency and Copyright, which apply to all GPAI models, and Safety and Security, which applies only to GPAI models with systemic risk.

“As governments across the world work towards developing comprehensive AI governance strategies, it is important for frameworks like the Code of Practice to pave the way for clear guidance and foster responsible innovation.”

Since the drafting process began last September, PAI has contributed significantly to the development of the Code, joining plenary sessions and contributing to all four working groups. We provided written feedback on multiple iterations of the draft, addressing the Transparency and the Safety and Security Sections and drawing on our published work on those topics. With most of our recommendations reflected in the finalized Code, we applaud the degree to which stakeholder feedback has been incorporated at each phase of the drafting process, improving the Code’s ability to promote safety, transparency, and compliance with the AI Act and to better uphold and protect the rights of EU citizens.

Transparency

The Code requires model developers to draw up model documentation and keep it up to date, and to provide relevant information to the AI Office, national AI regulators, and downstream providers. Regulators need this information to monitor compliance with the AI Act, and downstream providers need it to integrate GPAI models into their own systems and to comply with their own obligations.

PAI has undertaken extensive work on the importance of documentation for AI models and systems, including our ABOUT ML workstream, our Model Deployment Guidance, and our 2025 Progress Report on post-deployment documentation. We welcome the focus on documentation and the inclusion in the Code of a template Model Documentation Form.

To date, there has been no consensus on either the form or the content of documentation artifacts. Yet the benefits of documentation are greatest when it is comparable across models and systems, making it easier to judge the relative performance, suitability, or impact of models.

Standardization of model documentation is also a key foundation for interoperability between legal and policy frameworks for foundation models, which we discuss at length in our report on the topic. The Model Documentation Form has the potential to promote policy interoperability, and PAI urges the Code’s drafters to consider promoting harmonization with evolving international best practices in future iterations of the Code.
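To illustrate the point, a standardized documentation artifact is ultimately a common schema that every provider fills in the same way. The sketch below shows one minimal, hypothetical shape such a record could take; the field names are our own illustrative assumptions, not the actual Model Documentation Form published with the Code.

```python
from dataclasses import dataclass, field

# A hypothetical, machine-readable model-documentation record.
# Field names are illustrative only; they do not reproduce the
# Code's Model Documentation Form.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str                      # ISO 8601, e.g. "2025-07-31"
    modalities: list[str]                  # e.g. ["text", "image"]
    training_compute_flops: float          # cumulative training compute
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)

# Because every record shares the same fields, regulators and
# downstream providers can compare models directly.
doc = ModelDocumentation(
    model_name="example-model-1",
    provider="Example Provider",
    release_date="2025-07-31",
    modalities=["text"],
    training_compute_flops=3.1e24,
)
print(doc.model_name, f"{doc.training_compute_flops:.1e} FLOPs")
```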

The Code requires disclosure of information to EU regulators and downstream providers. It also encourages signatories to consider what information can be publicly disclosed. PAI would like to see additional guidance about what information about models should be publicly released in subsequent versions of the Code. Increased transparency will promote independent evaluations of models and ultimately increase the safety of, and public trust in, deployed AI models.

“. . . our recommendations reflected in the finalized Code . . . [improve] the Code’s ability to promote safety, transparency, and compliance with the AI Act and to better uphold and protect the rights of EU citizens.”

Safety and Security for GPAI models with systemic risks

PAI has undertaken significant work on foundation model safety. In particular, our Model Deployment Guidance contains detailed safety guidance for developers that is tailored to model capabilities and release strategy.

PAI is pleased to see that feedback on previous versions of the Code was taken on board by the drafters, and the Code now addresses a wider variety of systemic risks, including risks to fundamental rights, consistent with the AI Act.

PAI also welcomes the requirement for external evaluations of some GPAI models. Independent assessment of model capabilities and risks is crucial to ensuring that evaluations draw on the broad range of expertise the task requires, and to building wider trust that evaluation outcomes are objective.

Independent evaluations are a critical plank of a vibrant AI assurance ecosystem. PAI launched a policy research project at the AI Action Summit in France earlier this year to address the core factors needed to build out an assurance ecosystem to create justified trust in AI models and systems. In future iterations of the Code, we would like to see more detailed guidance about external evaluations both pre- and post-deployment, including robust safe harbor provisions for evaluators.

As with the Transparency section of the Code, future iterations should expand the guidance on releasing summaries of Safety and Security Frameworks and Model Reports, including more detail about when those summaries should be released and what they should contain.

We also welcome the inclusion in the Code of provisions for post-market monitoring and incident reporting. Sharing relevant information about a model’s impact after deployment is crucial to understanding how to amplify societal benefits, manage and mitigate risks, develop evidence-based and proportionate policy, and advance industry-wide norms.

Looking forward

While the Code offers a strong foundation, some areas could benefit from further development. These include greater guidance about identifying systemic risks and more detail about external evaluations. We urge the Commission to keep these matters under review and commit to regular and ongoing updates to the Code to ensure it reflects evolving best practices.

Regular review of the Code will be necessary to accommodate rapidly evolving best practices, as well as increasing model capabilities and the emergence of novel risks. In future reviews of the Code, we hope to see a number of issues addressed:

  • Ongoing research: Research on evaluations, metrics, and benchmarks for capabilities and risks is still ongoing. Similarly, best practices for post-market monitoring are continuing to develop. As our understanding of GPAI and best practices progresses, it should be reflected in more detailed guidance in the Code.
  • Updates to the threshold for GPAI models with systemic risk: The current compute-based threshold is widely acknowledged to be an imprecise proxy for risk (see the sketch after this list). Methods to identify which models require closer scrutiny are likely to evolve, and the Code should be updated to reflect this. As well as being a core part of the Code’s risk management framework, thresholds are a foundational plank of policy interoperability across jurisdictions. Future iterations of the Code should harmonize the threshold for models with systemic risk as closely as possible with the thresholds for frontier models in other national and international frameworks, within the constraints of the definitions and in-scope risks set out in the AI Act.
  • Public disclosure: The Code should address in more detail the public disclosure of standardized model documentation and of summaries of Safety and Security Frameworks and Model Reports.
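On the compute-based threshold: the AI Act presumes systemic risk when a model’s cumulative training compute exceeds 10^25 floating-point operations. The sketch below checks a model against that threshold using the common 6 × parameters × tokens estimate of training compute; the estimate is a community heuristic, not part of the Act, and the example model is hypothetical.

```python
# Rough check of the AI Act's compute-based systemic-risk presumption.
# The 1e25 FLOP threshold comes from the Act; the 6*N*D approximation
# of training compute is a widely used heuristic, not a legal rule.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute as 6 * params * tokens."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs,
# which falls just under the presumption threshold.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
```

The example also illustrates why the threshold is an imprecise proxy: small changes in token count or training efficiency can move a model across the line without any corresponding change in its capabilities or risks.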

The endorsement of the GPAI Code of Practice will mark a significant step forward in global AI governance. As governments across the world work towards developing comprehensive AI governance strategies, it is important for frameworks like the Code of Practice to pave the way for clear guidance and foster responsible innovation. The Code will provide developers and model providers with a structured approach to building systems that comply with the EU AI Act, ensuring that these systems are developed and deployed responsibly.

We are especially pleased to see the commitment to multistakeholder engagement throughout the drafting process, ensuring that voices from collaborators across sectors were heard. While views differ about the precise terms of the final Code, the efforts made to respond to feedback at each phase of drafting have been impressive and set a valuable precedent for collaborative AI governance. We are excited to see the Code put into practice, and we look forward to continuing our work to ensure AI is developed and deployed responsibly for the benefit of all. To stay up to date with our work in this area, sign up for our newsletter.


