Gary Marcus

Five quick updates about that Apple reasoning paper that people can’t stop talking about

June 17, 2025


Five quick updates on that Apple paper everybody is talking about (see my summary here, if you don’t yet know what the fuss is about).

The core of the paper challenged the idea that LLMs can reason at any deep level, and showed that they break down on moderately complex versions of basic algorithms (such as Tower of Hanoi with 8 discs, first solved with classical techniques around 1957), casting serious doubt on whether LLMs on their own could ever achieve AGI.
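For context, the classical solution the paper alludes to is tiny; here is a minimal sketch in Python (my own illustration, not code from the paper). The 8-disc instance requires 2**8 - 1 = 255 moves, every one of them produced mechanically by a few lines of recursion:

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Classical recursive Tower of Hanoi: move n discs from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)  # park the n-1 smaller discs
        moves.append((src, dst))            # move the largest disc
        hanoi(n - 1, aux, dst, src, moves)  # restack the smaller discs
    return moves

print(len(hanoi(8)))  # 255, i.e. 2**8 - 1
```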

It was the latest in a long line of demonstrations (going back to my own work in 1998) of neural networks struggling with what is now known as distribution shift: generalizing beyond what they have been trained on. To some extent they do this better now than in 1998, but it is still the core challenge.
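If the term is unfamiliar, here is a toy numerical sketch of distribution shift (again my own illustration, with a polynomial fit standing in for any flexible function approximator): the model is nearly perfect on inputs drawn from the training range and falls apart just outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a flexible model on y = sin(x), but only for x in [0, 3]
x_train = rng.uniform(0.0, 3.0, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=7)

x_in = rng.uniform(0.0, 3.0, 100)   # in-distribution test inputs
x_out = rng.uniform(6.0, 9.0, 100)  # distribution-shifted test inputs

err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()
print(f"error in-distribution: {err_in:.5f}")       # tiny
print(f"error under distribution shift: {err_out:.1f}")  # enormous
```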

Needless to say, AI enthusiasts are hopping mad. In an effort to save face, many of them are pointing to a rejoinder co-written by Anthropic's Claude (under the pen name C. Opus), called "The Illusion of the Illusion of Thinking," which allegedly refutes the Apple paper. Emphasis on allegedly.

Facts are not on their side.

"The illusion of the illusion" turned out to be an error-ridden joke. Literally. (If you read that last sentence carefully, you will see there are two links, not one; the first points out that there are multiple mathematical errors, the second is an essay by the guy who created the Sokal-hoax-style joke that went viral, acknowledging with chagrin what he had done.) In short, the whole thing was a put-on, unbeknownst to the zillions who reposted it. I kid you not.

That said, loads of people who are trying to cope with the Apple paper are still circulating the Opus-co-written rejoinder as if it were real and convincing. Pathetic. It's especially pathetic because I had already addressed and dissected the paper's main claim in my Seven Replies essay a few days ago, showing that its complexity claims didn't explain the actual data.

On that topic, nobody has written a convincing reply to my Seven Replies essay, despite the fact that about 90,000 people have read it. Excuse me for speculating, but I suspect there has been no compelling reply because the hypesters don't have one. That doesn't look good for skeptics of the Apple paper.

Computational linguist @msukhareva just added a technical dissection of the Opus-written paper, which you can find here, concluding, much as I do, “All in all, the Apple paper still stands its grounds and the LLM-generated debunking is weak at best.”

A new paper introducing a hard coding benchmark called LiveCodeBenchPro, designed to resist data contamination, offers still more converging evidence for the two core notions behind the Apple paper (and behind my own decades-long critique): (a) the systems struggle with reasoning, and (b) performance declines the further one strays from familiar problems.

Rohan Paul nicely summarizes the new benchmark, created by authors from multiple universities, in a thread that starts here.

In sum, performance on hard coding problems, an alleged area of LLM strength, drops to zero when you properly control for contamination. This strongly echoes a result that Ernest Davis and I discussed here in April, in which AI performance dropped precipitously on problems tested within six hours of a contest's release, making problem-specific data augmentation difficult.

Bottom line? What the Apple paper said still looks to be basically correct. Nitpicking (with or without mathematical blunders) is not going to change the fact that GenAI still struggles with distribution shift, after almost three decades of work. On familiar problems with no wrinkles, these systems are great. On anything else, they are suspect.

Or, as the software engineer Gorgi Kosev just put it on X, “[LLMs] are decent solvers of already solved problems, indeed.”

If we want to get to AGI, we’ll need to do a lot better.

Gary Marcus is sorry to keep writing about how LLMs struggle with distribution shift, but it’s the whole ballgame.

