
Google DeepMind Upgrades Frontier AI Safety Framework to Prevent Manipulation and Shutdown Risks

By Advanced AI Editor · September 23, 2025


Google DeepMind, Alphabet's AI research lab, today released the third version of its Frontier Safety Framework, aimed at strengthening the governance of powerful AI systems and preventing the risks these systems may pose if they go out of control.

The third version of the framework introduces a new focus on manipulation capabilities and expands the scope of safety reviews to scenarios where models may resist human shutdown or control.

A key highlight of the update is the addition of what DeepMind calls a “harmful manipulation” Critical Capability Level. This level is designed to address the potential for advanced models to significantly influence or change human beliefs and behaviors in high-risk situations. The addition builds on years of research into persuasive and manipulative mechanisms in generative AI, and formally establishes how to measure, monitor, and mitigate such risks before a model reaches critical thresholds.

The updated framework also applies stricter scrutiny to misalignment and control challenges, namely the issue of high-capability systems potentially resisting modification or shutdown.

DeepMind now requires safety case reviews not only before external deployment but also ahead of large-scale internal rollouts once a model reaches specific Critical Capability Level thresholds. These reviews are intended to compel teams to demonstrate that potential risks have been adequately identified, mitigated, and deemed acceptable before release.

In addition to the new risk category, the updated framework also refines the way DeepMind defines and applies capability levels. These improvements aim to clearly distinguish between routine operational concerns and the most serious threats, ensuring that governance mechanisms are triggered at the right time.

The Frontier Safety Framework emphasizes that mitigation measures must be proactively applied before systems cross dangerous thresholds, rather than reacting passively after issues arise.
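To make the threshold-gating idea concrete, the short Python sketch below models, in purely illustrative terms, how such a governance gate could work in principle: evaluation scores for each risk area are compared against predefined capability thresholds, and any area that crosses its threshold is flagged as requiring a safety case review before further rollout. The capability names, scores, and thresholds here are invented for illustration and do not reflect DeepMind's actual framework, evaluations, or internal tooling.

# Illustrative sketch only — not DeepMind's code or evaluation methodology.
# Each risk area has a predefined capability threshold; any evaluation score
# that meets or exceeds its threshold flags that area for a safety case review.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CapabilityLevel:
    name: str          # hypothetical risk-area identifier, e.g. "harmful_manipulation"
    threshold: float   # score at which governance measures are triggered

@dataclass
class EvalResult:
    capability: str    # which risk area this evaluation measured
    score: float       # hypothetical aggregate evaluation score

def reviews_required(levels: List[CapabilityLevel], results: List[EvalResult]) -> List[str]:
    """Return the risk areas whose thresholds have been crossed and which
    would therefore need a safety case review before further rollout."""
    by_name: Dict[str, CapabilityLevel] = {lvl.name: lvl for lvl in levels}
    triggered: List[str] = []
    for res in results:
        level = by_name.get(res.capability)
        if level is not None and res.score >= level.threshold:
            triggered.append(level.name)
    return triggered

if __name__ == "__main__":
    levels = [
        CapabilityLevel("harmful_manipulation", threshold=0.8),
        CapabilityLevel("shutdown_resistance", threshold=0.7),
    ]
    results = [
        EvalResult("harmful_manipulation", score=0.85),
        EvalResult("shutdown_resistance", score=0.40),
    ]
    print(reviews_required(levels, results))  # -> ['harmful_manipulation']

In this toy example only the manipulation score crosses its threshold, so only that area is flagged; the real framework, by contrast, rests on extensive evaluations, expert judgment, and internal governance processes rather than a single numeric gate.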

Four Flynn, Helen King, and Anca Dragan from Google DeepMind stated in a blog post: “The latest update to our Frontier Safety Framework reflects our ongoing commitment to adopting scientific and evidence-based approaches to track and stay ahead of AI risks as capabilities advance toward artificial general intelligence. By expanding our risk domains and strengthening our risk assessment processes, we aim to ensure that transformative AI benefits humanity while minimizing potential harms.”

The authors added that DeepMind expects the Frontier Safety Framework to continue evolving as new research, deployment experiences, and stakeholder feedback accumulate.

Q&A

Q1: What are the main updates in the third version of the Google DeepMind Frontier Safety Framework?

A: The third version of the framework primarily increases the focus on AI manipulation capabilities, establishes the “harmful manipulation” Critical Capability Level, and expands the scope of safety reviews to cover scenarios where models may resist human shutdown or control. It also refines the definitions and applications of capability levels.

Q2: What is the harmful manipulation Critical Capability Level?

A: The harmful manipulation Critical Capability Level is a new safety assessment standard introduced by DeepMind to address the risks posed by advanced AI models that could significantly influence or alter human beliefs and behaviors in high-risk contexts. It is based on years of research into persuasive and manipulative mechanisms in generative AI.

Q3: How does the Frontier Safety Framework ensure the safety of AI systems?

A: The framework requires safety case reviews to be conducted both before external deployment and ahead of large-scale internal rollouts once specific capability thresholds are reached. It emphasizes that mitigation measures must be proactively applied before systems cross dangerous thresholds, rather than responding passively after problems arise, ensuring that potential risks are fully identified and mitigated.


