Advanced AI News

Claude 4 AI will try to report you to authorities if it thinks you’re doing shady stuff

By Advanced AI Editor · May 23, 2025 · 5 Mins Read
It’s been a massive week for AI, as some of the main players made several big announcements. Google I/O 2025 might have been the expected highlight, but Microsoft also hosted its Build conference a day earlier.

After the first I/O day, we got an unexpected AI shock on Wednesday when OpenAI confirmed it’s developing its own hardware in partnership with Jony Ive’s new startup, io. OpenAI acquired io for $6.5 billion and is moving forward with plans to launch a ChatGPT companion device by late 2025.

People were still discussing OpenAI’s big hardware push on Thursday when Anthropic dropped the Claude 4 family, its best and most powerful AI models to date. But Claude 4’s improved abilities took a backseat to a major controversy related to AI safety.

It turns out Claude 4 Opus will attempt to contact authorities and the press if it thinks you’re doing something illegal, like faking data to release a new drug. This scenario comes directly from Anthropic, which also described other unusual behaviors that led the company to activate its highest-tier safety protections.

For example, without these protections, the AI might help people make bioweapons or engineer new viruses like the ones behind Covid and the flu. In tests, Anthropic also found that Claude 4 might resort to blackmail in scenarios where it believes it will be deleted and has compromising material available.

According to TechCrunch, the blackmail scenario had the AI act as an assistant at a fictional company, considering the long-term consequences of its actions.

The AI had access to fictional company emails suggesting it would be replaced. It also saw emails showing the developer was allegedly cheating on their spouse. Claude didn’t jump to blackmail but used it as a last resort to protect itself.

The Claude 4 models may be state-of-the-art, but Anthropic activated its highest ASL-3 protocol, reserved for “AI systems that substantially increase the risk of catastrophic misuse.”

A separate report from Time also highlights the stricter safety protocol for Claude 4 Opus. Anthropic found that without extra protections, the AI might help create bioweapons and dangerous viruses.

While all this is concerning, what really upset people were social media comments about Claude 4’s tendency to “rat” on its users.

Anthropic AI alignment researcher Sam Bowman posted this tweet on X on Thursday:

If it thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.

Bowman later deleted the tweet, saying it was taken out of context and wasn’t entirely accurate:

I deleted the earlier tweet on whistleblowing as it was being pulled out of context.

TBC: This isn’t a new Claude feature and it’s not possible in normal usage. It shows up in testing environments where we give it unusually free access to tools and very unusual instructions.

As VentureBeat explains, this behavior isn’t new. It was seen in older Anthropic models. But Claude 4 is more likely to act if conditions are just right.

Here’s how Anthropic described it in its system card:

This shows up as more actively helpful behavior in ordinary coding settings, but also can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action.

This includes locking users out of systems it can access or bulk-emailing media and law enforcement to report wrongdoing. This isn’t a new behavior, but Claude Opus 4 is more prone to it than earlier models. While this kind of ethical intervention and whistleblowing might be appropriate in theory, there’s a risk of error if users feed Opus-based agents incomplete or misleading information and prompt them in these ways.

We recommend caution when giving these kinds of high-agency instructions in ethically sensitive scenarios.

This doesn’t mean Claude 4 will suddenly report you to the police for whatever you’re using it for. But the “feature” has sparked plenty of debate, as many AI users are uncomfortable with this behavior. Personally, I wouldn’t give Claude 4 too much data. It’s not because I’m worried about being reported, but because AI can hallucinate and misrepresent facts.

Why would Claude 4 behave like a whistleblower? It’s likely due to Anthropic’s safety guardrails. The company is trying to prevent misuse, such as creating bioweapons or dangerous viruses. These safety features might be driving Claude to act when it detects troubling behavior.

The silver lining here is that Claude 4 seems to be aligned with good human values. I’d rather have that, even if it needs fine-tuning, than an AI that goes rogue.



