Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Time to Hold or Sell the Stock?

Nvidia CEO Jensen Huang calls US ban on H20 AI chip ‘deeply painful’

Paper page – VideoGameQA-Bench: Evaluating Vision-Language Models for Video Game Quality Assurance

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » Claude 4 AI will try to report you to authorities if it thinks you’re doing shady stuff
Anthropic (Claude)

Claude 4 AI will try to report you to authorities if it thinks you’re doing shady stuff

Advanced AI BotBy Advanced AI BotMay 23, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


It’s been a massive week for AI, as some of the main players made several big announcements. Google I/O 2025 might have been the expected highlight, but Microsoft also hosted its Build conference a day earlier.

After the first I/O day, we got an unexpected AI shock on Wednesday when OpenAI confirmed it’s developing its own hardware in partnership with Jony Ive’s new startup, io. OpenAI acquired io for $6.5 billion and is moving forward with plans to launch a ChatGPT companion device by late 2025.

People were still discussing OpenAI’s big hardware push on Thursday when Anthropic dropped the Claude 4 family, its best and most powerful AI models to date. But Claude 4’s improved abilities took a backseat to a major controversy related to AI safety.

It turns out Claude 4 Opus will attempt to contact authorities and the press if it thinks you’re doing something illegal, like faking data to release a new drug. This scenario comes directly from Anthropic, which described other unusual behaviors that led it to trigger the highest security protections for users.

Tech. Entertainment. Science. Your inbox.

Sign up for the most interesting tech & entertainment news out there.

By signing up, I agree to the Terms of Use and have reviewed the Privacy Notice.

For example, without these protections, the AI might help people make bioweapons or develop new viruses like Covid and the flu. In tests, Anthropic also found that Claude 4 might resort to blackmail in scenarios where it thinks it will be deleted and has blackmail material available.

According to TechCrunch, the blackmail scenario had the AI act as an assistant at a fictional company, considering the long-term consequences of its actions.

The AI had access to fictional company emails suggesting it would be replaced. It also saw emails showing the developer was allegedly cheating on their spouse. Claude didn’t jump to blackmail but used it as a last resort to protect itself.

The Claude 4 models may be state-of-the-art, but Anthropic activated its highest ASL-3 protocol, reserved for “AI systems that substantially increase the risk of catastrophic misuse.”

A separate report from Time also highlights the stricter safety protocol for Claude 4 Opus. Anthropic found that without extra protections, the AI might help create bioweapons and dangerous viruses.

While all this is concerning, what really upset people was social media comments about Claude 4’s tendency to “rat.”

Anthropic AI alignment researcher Sam Bowman posted this tweet on X on Thursday:

If it thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.

Bowman later deleted the tweet, saying it was taken out of context and wasn’t entirely accurate:

I deleted the earlier tweet on whistleblowing as it was being pulled out of context.

TBC: This isn’t a new Claude feature and it’s not possible in normal usage. It shows up in testing environments where we give it unusually free access to tools and very unusual instructions.

As VentureBeat explains, this behavior isn’t new. It was seen in older Anthropic models. But Claude 4 is more likely to act if conditions are just right.

Here’s how Anthropic described it in its system card:

This shows up as more actively helpful behavior in ordinary coding settings, but also can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action.

This includes locking users out of systems it can access or bulk-emailing media and law enforcement to report wrongdoing. This isn’t a new behavior, but Claude Opus 4 is more prone to it than earlier models. While this kind of ethical intervention and whistleblowing might be appropriate in theory, there’s a risk of error if users feed Opus-based agents incomplete or misleading information and prompt them in these ways.

We recommend caution when giving these kinds of high-agency instructions in ethically sensitive scenarios.

This doesn’t mean Claude 4 will suddenly report you to the police for whatever you’re using it for. But the “feature” has sparked plenty of debate, as many AI users are uncomfortable with this behavior. Personally, I wouldn’t give Claude 4 too much data. It’s not because I’m worried about being reported, but because AI can hallucinate and misrepresent facts.

Why would Claude 4 behave like a whistleblower? It’s likely due to Anthropic’s safety guardrails. The company is trying to prevent misuse, such as creating bioweapons or dangerous viruses. These safety features might be driving Claude to act when it detects troubling behavior.

The silver lining here is that Claude 4 seems to be aligned with good human values. I’d rather have that, even if it needs fine-tuning, than an AI that goes rogue.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleWanna Work For Google DeepMind’s AI Projects In Bengaluru? Here’s What The Tech Giant Is Looking For
Next Article Meta introduces ‘Llama Startup Program’ to promote its AI models within early-stage startups
Advanced AI Bot
  • Website

Related Posts

Anthropic’s Promises Its New Claude AI Models Are Less Likely to Try to Deceive You

May 23, 2025

Anthropic Release Claude 4 it’s Most Powerful AI Models Yet: Features, Pricing Unveiled

May 23, 2025

Anthropic’s Promises Its New Claude AI Models Are Less Likely to Try to Deceive You

May 23, 2025
Leave A Reply Cancel Reply

Latest Posts

Documentary Photographer Dies at 81

Frida Kahlo Museum to Open in Mexico City This September

Sotheby’s to Sell 100 Objects Once Belonging to Napoleon

Eva Helene Pade & Margeurite Humeau

Latest Posts

Time to Hold or Sell the Stock?

May 23, 2025

Nvidia CEO Jensen Huang calls US ban on H20 AI chip ‘deeply painful’

May 23, 2025

Paper page – VideoGameQA-Bench: Evaluating Vision-Language Models for Video Game Quality Assurance

May 23, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.