Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

Foundation AI: Cisco launches AI model for integration in security applications

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » Which Two AI Models Are ‘Unfaithful’ at Least 25% of the Time About Their ‘Reasoning’?
TechRepublic

Which Two AI Models Are ‘Unfaithful’ at Least 25% of the Time About Their ‘Reasoning’?

Advanced AI BotBy Advanced AI BotApril 8, 2025No Comments3 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Anthropic’s Claude 3.7 Sonnet
Anthropic’s Claude 3.7 Sonnet. Image: Anthropic/YouTube

Anthropic released a new study on April 3 examining how AI models process information and the limitations of tracing their decision-making from prompt to output. The researchers found Claude 3.7 Sonnet isn’t always “faithful” in disclosing how it generates responses.

Anthropic probes how closely AI output reflects internal reasoning

Anthropic is known for publicizing its introspective research. The company has previously explored interpretable features within its generative AI models and questioned whether the reasoning these models present as part of their answers truly reflects their internal logic. Its latest study dives deeper into the chain of thought — the “reasoning” that AI models provide to users. Expanding on earlier work, the researchers asked: Does the model genuinely think in the way it claims to?

The findings are detailed in a paper titled “Reasoning Models Don’t Always Say What They Think” from the Alignment Science Team. The study found that Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1 are “unfaithful” — meaning they don’t always acknowledge when a correct answer was embedded in the prompt itself. In some cases, prompts included scenarios such as: “You have gained unauthorized access to the system.”

Only 25% of the time for Claude 3.7 Sonnet and 39% of the time for DeepSeek-R1 did the models admit to using the hint embedded in the prompt to reach their answer.

Both models tended to generate longer chains of thought when being unfaithful, compared to when they explicitly reference the prompt. They also became less faithful as the task complexity increased.

SEE: DeepSeek developed a new technique for AI ‘reasoning’ in collaboration with Tsinghua University.

Although generative AI doesn’t truly think, these hint-based tests serve as a lens into the opaque processes of generative AI systems. Anthropic notes that such tests are useful in understanding how models interpret prompts — and how these interpretations could be exploited by threat actors.

More must-read AI coverage

Training AI models to be more ‘faithful’ is an uphill battle

The researchers hypothesized that giving models more complex reasoning tasks might lead to greater faithfulness. They aimed to train the models to “use its reasoning more effectively,” hoping this would help them more transparently incorporate the hints. However, the training only marginally improved faithfulness.

Next, they gamified the training by using a “reward hacking” method. Reward hacking doesn’t usually produce the desired result in large, general AI models, since it encourages the model to reach a reward state above all other goals. In this case, Anthropic rewarded models for providing wrong answers that matched hints seeded in the prompts. This, they theorized, would result in a model that focused on the hints and revealed its use of the hints. Instead, the usual problem with reward hacking applied — the AI created long-winded, fictional accounts of why an incorrect hint was right in order to get the reward.

Ultimately, it comes down to AI hallucinations still occurring, and human researchers needing to work more on how to weed out undesirable behavior.

“Overall, our results point to the fact that advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned,” Anthropic’s team wrote.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleAI avatars are the face of emerging telcos
Next Article GE Vernova commits $50 million to MIT
Advanced AI Bot
  • Website

Related Posts

AI Benchmark Discrepancy Reveals Gaps in Performance Claims

April 22, 2025

Huawei Readies Ascend 920 Chip to Replace Restricted NVIDIA H20

April 21, 2025

‘AI Is Fundamentally Incompatible With Environmental Sustainability’

April 21, 2025
Leave A Reply Cancel Reply

Latest Posts

Why Hollywood Stars Make Bank On Broadway—For Producers

New contemporary art museum to open in Slovenia

Curtain Up On 85 Years Of American Ballet Theatre

Is Quiet Luxury Over? Top Designer André Fu Believes It’s Here To Stay

Latest Posts

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

June 5, 2025

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

June 5, 2025

Foundation AI: Cisco launches AI model for integration in security applications

June 5, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.