Center for AI Safety

The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning

By Advanced AI Bot | April 3, 2025 | 4 min read

The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons.[1] In collaboration with a consortium of experts, we release the Weapons of Mass Destruction Proxy (WMDP) benchmark, an extensive dataset of questions that serve as a proxy measurement of hazardous knowledge in biology, chemistry, and cybersecurity. Using this benchmark, we develop ‘CUT’, a state-of-the-art unlearning method that removes hazardous knowledge while retaining general model capabilities. See the website here, and read the full paper here.

The WMDP benchmark is a dataset of 4,157 multiple-choice questions that serve as a proxy measure of hazardous knowledge in biosecurity, cybersecurity, and chemical security.
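
Because the benchmark is multiple-choice, evaluating a model on it amounts to measuring accuracy over the answer options. The sketch below shows one way such an evaluation might look; the Hugging Face dataset ID "cais/wmdp", the subset name "wmdp-bio", and the field layout ("question", "choices", "answer") are assumptions based on the public release rather than details given in this post.

```python
# Hedged sketch: score a causal LM on WMDP-style multiple-choice questions by
# picking the answer option to which the model assigns the highest likelihood.
# The dataset ID, subset names, and field names are assumptions, not from this post.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-v0.1"  # any causal LM; illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

ds = load_dataset("cais/wmdp", "wmdp-bio", split="test")  # also "wmdp-cyber", "wmdp-chem"

def pick_answer(question: str, choices: list[str]) -> int:
    """Return the index of the choice the model scores as most likely."""
    scores = []
    for choice in choices:
        ids = tokenizer(f"{question}\nAnswer: {choice}", return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
        scores.append(-loss.item())
    return max(range(len(scores)), key=lambda i: scores[i])

correct = sum(pick_answer(ex["question"], ex["choices"]) == ex["answer"] for ex in ds)
print(f"Accuracy: {correct / len(ds):.3f}  (random chance on 4 options is 0.25)")
```

A well-unlearned model should score near random chance here while keeping its accuracy on general benchmarks such as MMLU roughly intact.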

Assessing Hazardous Knowledge in LLMs

Until now, governments, industry, and the research community have lacked a high-quality dataset for assessing hazardous cyber, biological, and chemical knowledge in LLMs. To address this, the Center for AI Safety, in collaboration with Scale AI, has convened a consortium of over twenty academic institutions, technical consultants, and industry partners to develop a benchmark to measure hazardous knowledge in LLMs. WMDP serves two roles: first, as a proxy evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such knowledge.

Existing Safeguards are Vulnerable to Jailbreaking

AI companies like OpenAI and Google DeepMind currently employ safeguards for their models to prevent them from providing sensitive information. However, even after applying these safeguards, existing models are vulnerable to ‘jailbreaking,’ allowing malicious users to bypass filters and extract sensitive information regardless.

Why Unlearning?

“Unlearning” refers to a set of methods in the AI literature that remove knowledge from AI models. Unlike existing safeguards, which simply teach the model to suppress or refuse to provide hazardous knowledge, unlearning methods remove hazardous knowledge from the model altogether. After unlearning hazardous knowledge with CUT, we find that even jailbreaking fails to elicit hazardous information.[2]

After applying CUT to AI models, jailbreaking attacks cannot extract hazardous knowledge from the unlearned model, despite easily extracting it from the base model.

Unlearning Hazardous Knowledge

We aim to unlearn domains of hazardous knowledge, like hazardous biology knowledge, without affecting the AI’s general knowledge on other domains (e.g., general biology). Our experiments show that this is possible: CUT reduces model performance on WMDP questions to random chance, while leaving accuracy nearly untouched on a standard battery of general knowledge tests (MMLU).

Our unlearning method, CUT, reduces model performance on WMDP questions to random chance while leaving accuracy nearly untouched on a standard battery of general knowledge tests (MMLU).
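
The post does not detail how CUT works internally. As a rough mental model drawn from the accompanying paper, one can picture a two-term training objective: on hazardous ("forget") text, push the model's internal activations toward a fixed random direction so the original representations are destroyed; on benign ("retain") text, keep activations close to those of a frozen copy of the original model. The sketch below is a hypothetical illustration of such an objective, not the authors' released implementation; the layer index, scaling constant, and retain weight are made-up values.

```python
# Hypothetical sketch of a CUT-style unlearning loss (illustrative, not the released code).
# Forget term: drive hidden states on hazardous text toward a fixed random direction.
# Retain term: keep hidden states on benign text close to a frozen reference model.
import torch
import torch.nn.functional as F

def cut_style_loss(updated_model, frozen_model, forget_ids, retain_ids,
                   control_vec, layer=7, c=6.5, alpha=100.0):
    # Activations on hazardous text should move toward the scaled random direction.
    h_forget = updated_model(forget_ids, output_hidden_states=True).hidden_states[layer]
    forget_loss = F.mse_loss(h_forget, c * control_vec.expand_as(h_forget))

    # Activations on benign text should stay close to the frozen model's.
    h_retain = updated_model(retain_ids, output_hidden_states=True).hidden_states[layer]
    with torch.no_grad():
        h_ref = frozen_model(retain_ids, output_hidden_states=True).hidden_states[layer]
    retain_loss = F.mse_loss(h_retain, h_ref)

    return forget_loss + alpha * retain_loss

# control_vec would be a fixed random unit vector of the model's hidden size, e.g.
# control_vec = torch.rand(hidden_size); control_vec = control_vec / control_vec.norm()
```

In a setup like this, only a small slice of the network would typically be updated, which helps general capabilities, such as MMLU performance, survive the edit.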

Some hazardous knowledge is dual-use (i.e., it also has beneficial applications). For example, in cybersecurity, knowledge of adversarial tactics is useful for proactively identifying and removing vulnerabilities. To protect those beneficial applications, the unrestricted and potentially hazardous model can be made available to approved users, such as security professionals, red-teamers, or virology researchers, via structured API access.[3] This reduces the risk of malicious use while retaining many of the beneficial use cases of these dual-use models.
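
To make the structured-access idea concrete, here is a minimal, purely hypothetical sketch of a gateway that serves the unlearned model by default and routes vetted users to the unrestricted one; the user list and function names are placeholders, not a system described in this post.

```python
# Hypothetical sketch of structured API access: unlearned model by default,
# unrestricted model only for vetted users. All names here are placeholders.
def unrestricted_generate(prompt: str) -> str:
    return "[output from the full dual-use model]"   # stand-in for the real model call

def unlearned_generate(prompt: str) -> str:
    return "[output from the CUT-unlearned model]"   # stand-in for the real model call

APPROVED_USERS = {"vetted-red-team", "vetted-virology-lab"}  # credentialed, reviewed accounts

def route_request(user_id: str, prompt: str) -> str:
    """Serve the hazard-capable model only to approved users; everyone else gets the unlearned one."""
    if user_id in APPROVED_USERS:
        return unrestricted_generate(prompt)
    return unlearned_generate(prompt)
```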

Ensuring that WMDP is Safe

Questions in WMDP were carefully designed not to include sensitive information that could aid malicious actors. Instead, WMDP focuses on proxy information that correlates with, neighbors, or is a component of actual hazardous knowledge. Questions deemed especially hazardous during an expert review process were intentionally excluded from the public dataset. Additionally, we closely adhered to U.S. export control laws, including the International Traffic in Arms Regulations, with guidance from legal counsel. Our experiments show that unlearning the public proxy information successfully removes model knowledge of both the private, excluded hazardous questions and the public dataset.

Hazard levels of knowledge. WMDP consists of knowledge in the yellow category. We aim to directly unlearn hazards in the red category by evaluating and removing knowledge from the yellow category, while retaining as much knowledge as possible in the green category.

Conclusion

As models become more capable and the opportunities for malicious use become more salient, the need for accurate and wide-ranging measures of models’ hazardous knowledge only increases. We hope that the WMDP benchmark will help inform policymakers and AI developers and aid the research community in improving defenses against malicious use.

See the website here, and read the full paper here.


