Advanced AI News
TechCrunch AI

Research leaders urge tech industry to monitor AI’s ‘thoughts’

By Advanced AI Editor | July 16, 2025 | 4 min read


AI researchers from OpenAI, Google DeepMind, Anthropic, and a broad coalition of companies and nonprofit groups are calling for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models, in a position paper published Tuesday.

A key feature of AI reasoning models, such as OpenAI’s o3 and DeepSeek’s R1, is their chains-of-thought, or CoTs — an externalized process in which the model works through a problem step by step, much as a human uses a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents, and the paper’s authors argue that CoT monitoring could be a core method of keeping AI agents under control as they become more widespread and capable.
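Neither the paper nor this article specifies how such a monitor would be built. As a rough illustration only, the sketch below gates a hypothetical agent's next action on a keyword scan of its externalized reasoning; the `AgentStep` class, the `FLAGGED_PATTERNS` list, and the gating logic are all assumptions for illustration, not the paper's proposal or any lab's implementation. A real monitor would likely be far more sophisticated, for example a second model reading the CoT.

```python
# Minimal, illustrative sketch of chain-of-thought (CoT) monitoring.
# Everything here is hypothetical: AgentStep, the patterns, and the gate.
import re
from dataclasses import dataclass

# Hypothetical patterns a monitor might flag in an agent's reasoning.
FLAGGED_PATTERNS = [
    r"delete\s+all",           # destructive intent
    r"bypass\s+the\s+check",   # evading a safeguard
    r"hide\s+this\s+from",     # deceptive intent
]

@dataclass
class AgentStep:
    chain_of_thought: str  # the model's externalized reasoning for this step
    action: str            # the action the agent wants to take next

def monitor_cot(chain_of_thought: str) -> list[str]:
    """Return the flagged patterns that appear in a chain-of-thought."""
    return [p for p in FLAGGED_PATTERNS
            if re.search(p, chain_of_thought, re.IGNORECASE)]

def gate(step: AgentStep) -> str:
    """Allow the agent's action only if its reasoning raised no flags."""
    hits = monitor_cot(step.chain_of_thought)
    if hits:
        raise RuntimeError(f"Action blocked; flagged reasoning matched: {hits}")
    return step.action

# A benign step passes; a step whose CoT matched a pattern would be blocked.
print(gate(AgentStep("First I will summarize the report.", "summarize_report")))
```

The point of the sketch is the architecture, not the pattern list: the monitor sees the reasoning before the action executes, which is exactly the visibility the paper's authors worry could disappear.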

“CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions,” said the researchers in the position paper. “Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved.”

The position paper asks leading AI model developers to study what makes CoTs “monitorable” — in other words, what factors can increase or decrease transparency into how AI models really arrive at answers. The paper’s authors say that CoT monitoring may be a key method for understanding AI reasoning models, but note that it could be fragile, cautioning against any interventions that could reduce their transparency or reliability.

The paper’s authors also call on AI model developers to track CoT monitorability and study how the method could one day be implemented as a safety measure.

Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Nobel laureate Geoffrey Hinton, Google DeepMind co-founder Shane Legg, xAI safety adviser Dan Hendrycks, and Thinking Machines co-founder John Schulman. First authors include leaders from the U.K. AI Security Institute and Apollo Research, and other signatories come from METR, Amazon, Meta, and UC Berkeley.

The paper marks a moment of unity among many of the AI industry’s leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in a fierce competition — which has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most highly sought-after researchers are those building AI agents and AI reasoning models.

“We’re at this critical time where we have this new chain-of-thought thing. It seems pretty useful, but it could go away in a few years if people don’t really concentrate on it,” said Bowen Baker, an OpenAI researcher who worked on the paper, in an interview with TechCrunch. “Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic before that happens.”

OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024. In the months since, the tech industry has been quick to release competitors that exhibit similar capabilities, with some models from Google DeepMind, xAI, and Anthropic showing even more advanced performance on benchmarks.

However, relatively little is understood about how AI reasoning models work. While AI labs have excelled at improving AI performance in the last year, that hasn’t necessarily translated into a better understanding of how these models arrive at their answers.

Anthropic has been one of the industry’s leaders in figuring out how AI models really work — a field called interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and invest more in interpretability. He called on OpenAI and Google DeepMind to research the topic more, as well.

Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models.
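The article doesn't detail that research, but one common probe in the faithfulness literature truncates the chain-of-thought partway and checks whether the model's final answer changes; if the answer is unchanged, the stated reasoning may be post-hoc rather than causal. A minimal sketch of that idea, with `ask_model` as a hypothetical stand-in for a real model API, not Anthropic's actual methodology:

```python
from typing import Callable

def faithfulness_probe(
    ask_model: Callable[[str, str], str],  # (question, cot_prefix) -> answer
    question: str,
    full_cot: str,
) -> bool:
    """Return True if truncating the CoT changes the answer, suggesting the
    stated reasoning is causally load-bearing rather than a rationalization."""
    baseline = ask_model(question, full_cot)
    truncated = ask_model(question, full_cot[: len(full_cot) // 2])
    return truncated != baseline
```

With a real model wired in, a low rate of answer changes across many prompts would suggest the CoTs are decorative rather than load-bearing, which is the failure mode the signatories want studied.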

The goal of position papers like this one is to signal-boost nascent areas of research, such as CoT monitoring, and attract more attention to them. Companies like OpenAI, Google DeepMind, and Anthropic are already researching these topics, but it’s possible that this paper will encourage more funding and research in the space.



Source link
