Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Open-source MCPEval makes protocol-level agent testing plug-and-play

Microsoft hires over 20 AI researchers from Google DeepMind

Intuit brings agentic AI to the mid-market saving organizations 17 to 20 hours a month

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Industry AI
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
Finance AI

Researchers from top AI labs including Google, OpenAI, and Anthropic warn they may be losing the ability to understand advanced AI models

By Advanced AI EditorJuly 22, 2025No Comments3 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


A group of 40 AI researchers, including contributors from OpenAI, Google DeepMind, Meta, and Anthropic, are sounding the alarm on the growing opacity of advanced AI reasoning models. In a new paper, the authors urge developers to prioritize research into “chain-of-thought” (CoT) processes, which provide a rare window into how AI systems make decisions. They are warning that as models become more advanced, this visibility could vanish.

AI researchers from leading labs are warning that they could soon lose the ability to understand advanced AI reasoning models.

In a position paper published last week, 40 researchers, including those from OpenAI, Google DeepMind, Anthropic, and Meta, called for more investigation into AI reasoning models’ “chain-of-thought” process. Dan Hendrycks, an xAI safety advisor, is also listed among the authors.

The “chain-of-thought” process, which is visible in reasoning models such as OpenAI’s o1 and DeepSeek’s R1, allows users and researchers to monitor an AI model’s “thinking” or “reasoning” process, illustrating how it decides on an action or answer and providing a certain transparency into the inner workings of advanced models.

The researchers said that allowing these AI systems to “‘think’ in human language offers a unique opportunity for AI safety,” as they can be monitored for the “intent to misbehave.” However, they warn that there is “no guarantee that the current degree of visibility will persist” as models continue to advance.

The paper highlights that experts don’t fully understand why these models use CoT or how long they’ll keep doing so. The authors urged AI developers to keep a closer watch on chain-of-thought reasoning, suggesting its traceability could eventually serve as a built-in safety mechanism.

“Like all other known AI oversight methods, CoT [chain-of-thought] monitoring is imperfect and allows some misbehavior to go unnoticed. Nevertheless, it shows promise, and we recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods,” the researchers wrote.

“CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions. Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved,” they added.

The paper has been endorsed by major figures, including OpenAI co-founder Ilya Sutskever and AI godfather Geoffrey Hinton.

Reasoning Models

AI reasoning models are a type of AI model designed to simulate or replicate human-like reasoning—such as the ability to draw conclusions, make decisions, or solve problems based on information, logic, or learned patterns. Advancing AI reasoning has been viewed as a key to AI progress among major tech companies, with most now investing in building and scaling these models.

OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024, with competitors like xAI and Google following close behind.

However, there are still a lot of questions about how these advanced models are actually working. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes.

Despite making big leaps in performance over the past year, AI labs still know surprisingly little about how reasoning actually unfolds inside their models. While outputs have improved, the inner workings of advanced models risk becoming increasingly opaque, raising safety and control concerns.

This story was originally featured on Fortune.com



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleWith OpenAI and Shopify As Customers, Ashby Raises $50M Series D For AI-Powered Talent Platform
Next Article DeepSeek’s namesake chatbot sees a drop in downloads as AI apps for work, education rise
Advanced AI Editor
  • Website

Related Posts

White House to unveil plan to push US AI abroad, crackdown on US AI rules, document shows

July 22, 2025

Google and OpenAI’s AI models win milestone gold at global math competition

July 21, 2025

$61.5 billion tech giant Anthropic has made a major hiring U-turn—now, it’s letting job applicants use AI months after banning it from the interview process

July 21, 2025

Comments are closed.

Latest Posts

Archaeologists Identify 5,500-Year-Old Megalithic Tombs in Poland

3,800-Year-Old Warrior’s Tomb Unearthed in Azerbaijan

Removed Romanesque Murals Must Be Returned to Sijena Monastery

President Trump Withdraws US from UNESCO

Latest Posts

Open-source MCPEval makes protocol-level agent testing plug-and-play

July 23, 2025

Microsoft hires over 20 AI researchers from Google DeepMind

July 23, 2025

Intuit brings agentic AI to the mid-market saving organizations 17 to 20 hours a month

July 23, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Open-source MCPEval makes protocol-level agent testing plug-and-play
  • Microsoft hires over 20 AI researchers from Google DeepMind
  • Intuit brings agentic AI to the mid-market saving organizations 17 to 20 hours a month
  • Alibaba Unveils Qwen3-Coder: Advanced Open-Source AI Model for Software Development
  • Microsoft Raids Google DeepMind To Supercharge AI Copilot, Even As Redmond Cuts 9,000 Jobs Elsewhere: Report – Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)

Recent Comments

  1. binance on OpenAI DALL-E: Fighter Jet For The Mind! ✈️
  2. JeffreyCoalo on Local gov’t reps say they look forward to working with Thomas
  3. Duanepiems on Orange County Museum of Art Discusses Merger with UC Irvine
  4. fpmarkGoods on How Cursor and Claude Are Developing AI Coding Tools Together
  5. avenue17 on Local gov’t reps say they look forward to working with Thomas

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.