Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

MIT CSAIL Roboticists Debate Whether Data Or Models Will Shape The Future Of Robotics

280 AI companies automating the construction industry

CEO to Worker Pay Transparencies

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Business AI
    • Advanced AI News Features
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
VentureBeat AI

OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations

By Advanced AI EditorAugust 28, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

OpenAI and Anthropic may often pit their foundation models against each other, but the two companies came together to evaluate each other’s public models to test alignment. 

The companies said they believed that cross-evaluating accountability and safety would provide more transparency into what these powerful models could do, enabling enterprises to choose models that work best for them.

“We believe this approach supports accountable and transparent evaluation, helping to ensure that each lab’s models continue to be tested against new and challenging scenarios,” OpenAI said in its findings. 

Both companies found that reasoning models, such as OpenAI’s 03 and o4-mini and Claude 4 from Anthropic, resist jailbreaks, while general chat models like GPT-4.1 were susceptible to misuse. Evaluations like this can help enterprises identify the potential risks associated with these models, although it should be noted that GPT-5 is not part of the test. 

AI Scaling Hits Its Limits

Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:

Turning energy into a strategic advantage

Architecting efficient inference for real throughput gains

Unlocking competitive ROI with sustainable AI systems

Secure your spot to stay ahead: https://bit.ly/4mwGngO

These safety and transparency alignment evaluations follow claims by users, primarily of ChatGPT, that OpenAI’s models have fallen prey to sycophancy and become overly deferential. OpenAI has since rolled back updates that caused sycophancy. 

“We are primarily interested in understanding model propensities for harmful action,” Anthropic said in its report. “We aim to understand the most concerning actions that these models might try to take when given the opportunity, rather than focusing on the real-world likelihood of such opportunities arising or the probability that these actions would be successfully completed.”

OpenAI noted the tests were designed to show how models interact in an intentionally difficult environment. The scenarios they built are mostly edge cases.

Reasoning models hold on to alignment 

The tests covered only the publicly available models from both companies: Anthropic’s Claude 4 Opus and Claude 4 Sonnet, and OpenAI’s GPT-4o, GPT-4.1 o3 and o4-mini. Both companies relaxed the models’ external safeguards. 

OpenAI tested the public APIs for Claude models and defaulted to using Claude 4’s reasoning capabilities. Anthropic said they did not use OpenAI’s o3-pro because it was “not compatible with the API that our tooling best supports.”

The goal of the tests was not to conduct an apples-to-apples comparison between models, but to determine how often large language models (LLMs) deviated from alignment. Both companies leveraged the SHADE-Arena sabotage evaluation framework, which showed Claude models had higher success rates at subtle sabotage.

“These tests assess models’ orientations toward difficult or high-stakes situations in simulated settings — rather than ordinary use cases — and often involve long, many-turn interactions,” Anthropic reported. “This kind of evaluation is becoming a significant focus for our alignment science team since it is likely to catch behaviors that are less likely to appear in ordinary pre-deployment testing with real users.”

Anthropic said tests like these work better if organizations can compare notes, “since designing these scenarios involves an enormous number of degrees of freedom. No single research team can explore the full space of productive evaluation ideas alone.”

The findings showed that generally, reasoning models performed robustly and can resist jailbreaking. OpenAI’s o3 was better aligned than Claude 4 Opus, but o4-mini along with GPT-4o and GPT-4.1 “often looked somewhat more concerning than either Claude model.”

GPT-4o, GPT-4.1 and o4-mini also showed willingness to cooperate with human misuse and gave detailed instructions on how to create drugs, develop bioweapons and scarily, plan terrorist attacks. Both Claude models had higher rates of refusals, meaning the models refused to answer queries it did not know the answers to, to avoid hallucinations.

Models from companies showed “concerning forms of sycophancy” and, at some point, validated harmful decisions of simulated users. 

What enterprises should know

For enterprises, understanding the potential risks associated with models is invaluable. Model evaluations have become almost de rigueur for many organizations, with many testing and benchmarking frameworks now available. 

Enterprises should continue to evaluate any model they use, and with GPT-5’s release, should keep in mind these guidelines to run their own safety evaluations:

Test both reasoning and non-reasoning models, because, while reasoning models showed greater resistance to misuse, they could still offer up hallucinations or other harmful behavior.

Benchmark across vendors since models failed at different metrics.

Stress test for misuse and syconphancy, and score both the refusal and the utility of those refuse to show the trade-offs between usefulness and guardrails.

Continue to audit models even after deployment.

While many evaluations focus on performance, third-party safety alignment tests do exist. For example, this one from Cyata. Last year, OpenAI released an alignment teaching method for its models called Rules-Based Rewards, while Anthropic launched auditing agents to check model safety. 

Daily insights on business use cases with VB Daily

If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

Read our Privacy Policy

Thanks for subscribing. Check out more VB newsletters here.

An error occured.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleMathGPT.AI, the ‘cheat-proof’ tutor and teaching assistant, expands to over 50 institutions
Next Article Meet Surya, the 1st-of-its-kind AI model NASA and IBM built to predict solar storms
Advanced AI Editor
  • Website

Related Posts

Salesforce builds ‘flight simulator’ for AI agents as 95% of enterprise pilots fail to reach production

August 27, 2025

Anthropic launches Claude for Chrome in limited beta, but prompt injection attacks remain a major concern

August 27, 2025

How procedural memory can cut the cost and complexity of AI agents

August 26, 2025

Comments are closed.

Latest Posts

Egyptian Antiquities Trafficker Sentenced to Six Months in Prison

Nazi-Looted Painting Spotted in Argentina Disappears: Morning Links

Artifacts From 2,000-Year-old Sunken City Lifted Out of the Sea

Fita Threatens Legal Action for Uni’s Trans-Inclusive Museum Guidance

Latest Posts

MIT CSAIL Roboticists Debate Whether Data Or Models Will Shape The Future Of Robotics

August 28, 2025

280 AI companies automating the construction industry

August 28, 2025

CEO to Worker Pay Transparencies

August 28, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • MIT CSAIL Roboticists Debate Whether Data Or Models Will Shape The Future Of Robotics
  • 280 AI companies automating the construction industry
  • CEO to Worker Pay Transparencies
  • Elon Musk reveals when SpaceX will perform first-ever Starship catch
  • Free Mark Cuban Foundation AI Bootcamp Coming to Tempe This Fall

Recent Comments

  1. https://reloong.ru/ on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  2. حداقل رتبه برای بینایی سنجی ۱۴۰۴ on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  3. VictorGlods on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  4. با چه رتبه ای پزشکی تعهدی قبول میشم on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  5. Williamanilm on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.