TechCrunch AI

OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

By Advanced AI Editor | April 16, 2025 | 3 min read


OpenAI has updated its Preparedness Framework — the internal system it uses to assess the safety of AI models and determine necessary safeguards during development and deployment. In the update, OpenAI stated that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” system without similar protections in place.

The change reflects the increasing competitive pressures on commercial AI developers to deploy models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing. Last week, 12 former OpenAI employees filed a brief in Elon Musk’s case against OpenAI, arguing the company would be encouraged to cut even more corners on safety should it complete its planned corporate restructuring.

Perhaps anticipating criticism, OpenAI claims that it wouldn’t make these policy adjustments lightly, and that it would keep its safeguards at “a level more protective.”

“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” wrote OpenAI in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”

The refreshed Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations to speed up product development. The company says that while it hasn’t abandoned human-led testing altogether, it has built “a growing suite of automated evaluations” that can supposedly “keep up with [a] faster [release] cadence.”

Some reports contradict this. According to the Financial Times, OpenAI gave testers less than a week for safety checks for an upcoming major model — a compressed timeline compared to previous releases. The publication’s sources also alleged that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than the versions released to the public.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

OpenAI is quietly reducing its safety commitments.

Omitted from OpenAI’s list of Preparedness Framework changes:

No longer requiring safety tests of finetuned models https://t.co/oTmEiAtSjS

— Steven Adler (@sjgadler) April 15, 2025

Other changes to OpenAI’s framework pertain to how the company categorizes models according to risk, including models that can conceal their capabilities, evade safeguards, prevent their shutdown, and even self-replicate. OpenAI says that it’ll now focus on whether models meet one of two thresholds: “high” capability or “critical” capability.

OpenAI defines the former as models that could “amplify existing pathways to severe harm,” and the latter as models that “introduce unprecedented new pathways to severe harm.”

“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” wrote OpenAI in its blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
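Read together, the two tiers amount to a simple gate: high-capability systems must clear a safeguards check before deployment, while critical-capability systems must also clear one during development. The sketch below is a hypothetical illustration of that rule only; the class and function names are assumptions and do not reflect OpenAI's actual tooling.

    from dataclasses import dataclass
    from enum import Enum


    class Capability(Enum):
        """Illustrative tiers mirroring the framework's public description."""
        BELOW_THRESHOLD = 0
        HIGH = 1       # could "amplify existing pathways to severe harm"
        CRITICAL = 2   # could "introduce unprecedented new pathways to severe harm"


    @dataclass
    class Evaluation:
        capability: Capability
        deployment_safeguards_ok: bool   # safeguards judged sufficient for deployment
        development_safeguards_ok: bool  # safeguards judged sufficient during development


    def may_proceed(ev: Evaluation, deploying: bool) -> bool:
        """Hypothetical gate: HIGH systems need safeguards before deployment;
        CRITICAL systems also need them during development."""
        if ev.capability is Capability.CRITICAL and not ev.development_safeguards_ok:
            return False
        if deploying and ev.capability in (Capability.HIGH, Capability.CRITICAL):
            return ev.deployment_safeguards_ok
        return True


    # Example: a high-capability model without deployment safeguards is blocked.
    print(may_proceed(Evaluation(Capability.HIGH, False, True), deploying=True))  # False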

The updates are the first OpenAI has made to the Preparedness Framework since 2023.


