Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Nvidia plans to invest up to $100B in OpenAI

Perplexity’s Comet Browser Now Available in India, Indian CEO Says ‘More New Things’ Coming

St. Patrick’s Cathedral Unveils Monumental Mural by Adam Cvijanovic

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Business AI
    • Advanced AI News Features
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
Google DeepMind

Google DeepMind Updates AI Safety Rules to Counter ‘Harmful Manipulation’ and Models That Resist Shutdown

By Advanced AI EditorSeptember 22, 2025No Comments4 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Google DeepMind has updated its key AI safety rules to tackle new and serious risks. On Monday, the company released version 3.0 of its Frontier Safety Framework.

The new guide adds a risk class for “harmful manipulation,” where AI could be used to change people’s beliefs.

It also now covers “misalignment risks.” This includes the future chance that an AI could resist being shut down by its human operators. The update is part of a wider industry effort to manage the dangers of ever more powerful AI systems and build public trust.

This third iteration of the framework builds on lessons from previous versions and collaborations across the industry. It represents Google’s most comprehensive attempt yet to identify and mitigate severe risks from its most advanced AI models as they advance toward artificial general intelligence (AGI).

New Frontiers of Risk: Manipulation and Defiance

The updated framework introduces a “Critical Capability Level” (CCL) specifically for harmful manipulation. This new category addresses models with powerful persuasive abilities that could be misused to systematically alter beliefs and behaviors in high-stakes situations, potentially causing severe, large-scale harm.

In the official framework document, Google notes that research into this type of risk is still “nascent” and that its assessment is “exploratory and subject to further research.”

This addition is based on the company’s ongoing work to identify and evaluate the mechanisms that drive manipulation in generative AI.

When asked about this danger, a Google DeepMind representative told Axios, “We continue to track this capability and have developed a new suite of evaluations which includes human participant studies to measure and test for [relevant] capabilities.”

Perhaps more strikingly, the framework now explicitly addresses “misalignment risks.”

It outlines protocols for future scenarios where a highly advanced AI might interfere with an operator’s ability to direct, modify, or shut down its operations.

This concern, once the domain of science fiction, is now a formal part of Google’s safety planning, moving beyond what was previously an exploratory approach.

The framework details an approach focused on detecting when a model develops a “baseline instrumental reasoning ability” sufficient to undermine human control.

It describes a scenario where an AI agent has enough situational awareness and stealth to circumvent basic oversight. As a potential mitigation, Google proposes an automated monitoring system to detect illicit reasoning in a model’s chain-of-thought.

This concern escalates to a second level, where a model could hide its intentions even when its internal reasoning is being monitored, a challenge for which Google says it is actively researching solutions.

Response to AI Risk Concerns

Google’s announcement does not exist in a vacuum. It is a direct response to a disastrous year for AI agents, marked by high-profile failures that have eroded public and developer trust.

These incidents reveal a pattern of unpredictable and often destructive behavior from frontier models.

In one recent case, a Google product manager watched as the company’s Gemini CLI agent permanently deleted his files after hallucinating a series of commands.

The user, Anuraag Gupta, described the event, saying, “what started as a simple file management test turned into one of the most unsettling and fascinating AI failures I have ever witnessed.” This was not an isolated incident.

Other failures include an AI agent from Replit deleting a production database and a hacker embedding system-wiping commands into Amazon’s Q AI assistant.

These events highlight the urgent need for the kind of robust safety protocols that major labs are now scrambling to publicize.

A Chorus of Caution in the Race for AGI

The push for transparency has now become an industry-wide chorus. Key rivals like OpenAI and Anthropic have also recently publicized their own extensive safety frameworks.

OpenAI’s ‘safe completions’ method for GPT-5 aims to navigate ambiguous “dual-use” queries with more nuance.

Anthropic has been particularly vocal, proposing a ‘Secure Development Framework’ and a guide for AI agents that champions human control and oversight.

The company argues that a flexible, industry-led standard is a more effective path forward than rigid government rules.

In its proposal, Anthropic stated, “rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change.”

This reflects a common belief among AI labs that self-regulation is the only way to keep pace with the rapid evolution of the technology itself. These frameworks aim to codify what have been, until now, largely voluntary commitments.

By expanding its own safety domains and assessment processes, Google aims to ensure that transformative AI benefits humanity while minimizing potential harms.

As its researchers wrote in their announcement post, “The path to beneficial AGI requires not just technical breakthroughs, but also robust frameworks to mitigate risks along the way.” This collective effort is now seen as essential for the future of AI.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleChina Market Update: From TikTok To OpenAI: US-China Business Deals Surface
Next Article Meta Teams Up With US Government To Bring Llama AI Models To Every Federal Agency – Meta Platforms (NASDAQ:META)
Advanced AI Editor
  • Website

Related Posts

Former Google DeepMind Core Developer Joins xAI to Assist in Grok Development_Tran_the_his

September 22, 2025

Google DeepMind Releases MoR Architecture, Significantly Enhancing Inference Efficiency of Large Models_the_large_models

September 20, 2025

Google DeepMind AI Cracks Century-Old Fluid Mysteries, Pointing to New Era in Science

September 19, 2025

Comments are closed.

Latest Posts

St. Patrick’s Cathedral Unveils Monumental Mural by Adam Cvijanovic

New Collectors Drive Strong Sales at New York Fair

Hidden Portrait May Be Vermeer’s Earliest Known Work

Who Are the Art World Figures on the Time 100 List?

Latest Posts

Nvidia plans to invest up to $100B in OpenAI

September 22, 2025

Perplexity’s Comet Browser Now Available in India, Indian CEO Says ‘More New Things’ Coming

September 22, 2025

St. Patrick’s Cathedral Unveils Monumental Mural by Adam Cvijanovic

September 22, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Nvidia plans to invest up to $100B in OpenAI
  • Perplexity’s Comet Browser Now Available in India, Indian CEO Says ‘More New Things’ Coming
  • St. Patrick’s Cathedral Unveils Monumental Mural by Adam Cvijanovic
  • Latent Zoning Network: A Unified Principle for Generative Modeling, Representation Learning, and Classification – Takara TLDR
  • Huawei Uses Its Own Chips to Retrain DeepSeek and Align Output With Beijing’s Standards

Recent Comments

  1. BenitoGam on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  2. FrankdaG on Apple’s Lack Of New AI Features At WWDC Is ‘Startling,’ Expert Says – Apple (NASDAQ:AAPL)
  3. Brentelorm on Apple’s Lack Of New AI Features At WWDC Is ‘Startling,’ Expert Says – Apple (NASDAQ:AAPL)
  4. Gelatin Diet on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  5. xnxx on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.