Five quick updates about that Apple reasoning paper that people can’t stop talking about

By Advanced AI Editor | June 17, 2025


Five quick updates on that Apple paper everybody is talking about (see my summary here, if you don’t yet know what the fuss is about).

The core of the paper challenged the idea that LLMs can reason at any deep level, and showed that they broke down on moderately complex versions of basic algorithms (such as Tower of Hanoi with 8 discs, first solved with classical techniques around 1957), casting serious doubt on whether LLMs on their own could ever achieve AGI.
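
For a sense of how trivial the task is for classical methods, here is a minimal sketch (my own illustration, not code from the Apple paper): the textbook recursive Tower of Hanoi solver, which produces the optimal 255-move sequence for 8 discs instantly.

    def hanoi(n, source, target, spare, moves):
        # Classical recursive solution: move the top n-1 discs out of the way,
        # move the largest disc, then restack the n-1 discs on top of it.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)

    moves = []
    hanoi(8, "A", "C", "B", moves)
    print(len(moves))  # 2**8 - 1 = 255 moves, generated in well under a millisecond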

It was the latest in a long line of demonstrations (going back to my own work in 1998) of neural networks struggling with what is now known as distribution shift: generalization beyond what they have been trained on. To some extent they do this better than they did in 1998, but it is still the core challenge.
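
To make “distribution shift” concrete, here is a toy sketch of my own (not from the 1998 work or the Apple paper): fit a flexible model on inputs drawn from one range, then evaluate it outside that range. The in-range error is tiny; the out-of-range error blows up.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, np.pi, 200)   # training distribution
    y_train = np.sin(x_train)

    # Degree-5 polynomial as a stand-in for any flexible function approximator.
    coeffs = np.polyfit(x_train, y_train, deg=5)

    x_in = np.linspace(0, np.pi, 50)            # in-distribution inputs
    x_out = np.linspace(np.pi, 2 * np.pi, 50)   # outside the training range

    err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
    err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()
    print(err_in, err_out)  # tiny in-distribution, orders of magnitude larger out-of-distribution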

Needless to say, AI enthusiasts are hopping mad. In an effort to save face, many of them are pointing to a rejoinder co-written by Anthropic’s Claude (under the pen name C. Opus) called “The Illusion of the Illusion of Thinking” that allegedly refutes the Apple paper. Emphasis on allegedly.

Facts are not on their side.

“The illusion of the illusion” turned out to be an error-ridden joke. Literally. (If you read that last sentence carefully, you will see there are two links, not one; the first points out that there are multiple mathematical errors, the second is an essay by the guy who created the Sokal-hoax-style joke that went viral, acknowledging it with chagrin.) In short, the whole thing was a put-on, unbeknownst to the zillions who reposted it. I kid you not.

That said, loads of people who are trying to cope with the Apple paper are still circulating the Opus-co-written rejoinder as if it were real and convincing. Pathetic. It’s especially pathetic because I had already addressed and dissected the paper’s main claim in my Seven Replies essay a few days ago, showing that its complexity claims didn’t explain the actual data.

On that topic, nobody has written a convincing reply to my Seven Replies essay, despite the fact that about 90,000 people have read it. Excuse me for speculating, but I suspect that there has been no compelling reply because the hypesters don’t have a compelling answer. That doesn’t look good for skeptics of the Apple paper.

Computational linguist @msukhareva just added a technical dissection of the Opus-written paper, which you can find here, concluding, much as I do, “All in all, the Apple paper still stands its grounds and the LLM-generated debunking is weak at best.”

A new paper with a hard coding benchmark called LiveCodeBenchPro, designed to resist data contamination, shows still more converging evidence for the two core notions behind the Apple paper (and behind my own decades-long critique): (a) the systems are challenged at reasoning, and (b) performance declines as one strays further and further from familiar problems.

Rohan Paul nicely summarizes the new benchmark, created by authors from multiple universities, in a thread that starts here:

In sum, performance on hard coding problems, an alleged area of LLM strength, drops to zero when you control carefully for contamination. This strongly echoes a result that Ernest Davis and I discussed here in April, in which AI performance dropped precipitously on problems that were tested within six hours of a contest, making problem-specific data augmentation difficult.

Bottom line? What the Apple paper said still looks to be basically correct. Nitpicking (with or without mathematical blunders) is not going to change the fact that GenAI systems are still struggling with distribution shift, after almost three decades of work. On familiar problems with no wrinkles, they are great. On anything else, they are suspect.

Or, as the software engineer Gorgi Kosev just put it on X, “[LLMs] are decent solvers of already solved problems, indeed.”

If we want to get to AGI, we’ll need to do a lot better.

Gary Marcus is sorry to keep writing about how LLMs struggle with distribution shift, but it’s the whole ballgame.
