OpenAI’s o3 and Tyler Cowen’s Misguided AGI Fantasy

Gary Marcus | April 17, 2025


Breaking news, not fiction: OpenAI finally released o3 and Tyler Cowen claimed it was AGI

Tyler Cowen has become the ultimate “AI Influencer”, and I don’t mean that as a compliment. “AI Influencers” are, truth be told, people who pump up AI in order to gain influence, writing wild over-the-top praise of AI without engaging with its drawbacks and limitations. The most egregious of that species also demonize (not just critique) anyone who does point to limitations. Often they come across as quasi-religious. A new essay in the FT yesterday by Siddharth Venkataramakrishnan calls this kind of dreck “slopganda”: produced and distributed by “a circle of AI firms, VCs backing those firms, talking shops made up of employees of those firms, and the long tail is the hangers-on, content creators, newsletter writers and marketing experts.”

Sadly, Cowen, noted economist and podcast regular who has received more than his share of applause lately at The Economist and The Free Press, has joined their ranks, and—not to be outdone—become the most extreme of the lot, leaving even Kevin (AGI will be here in three years) Roose and Casey (AI critics are evil) Newton in the dust, making them look balanced and tempered by comparison.

I’ve noticed this disappointing transformation in Cowen (I used to respect him, and enjoyed our initial conversations in August and November 2021) over the last three years – more or less since ChatGPT dropped – and it has grown steadily worse over time.

More and more, his discussions of AI have become entirely one-sided, often featuring over-the-top instantaneous reports from the front line that don’t bear up over time, like one in February in which he alleged that Deep Research had written “a number of ten-page papers [with] quality as comparable to having a good PhD-level research assistant”, without even acknowledging, for example, the massive problem LLMs have with fabricating citations. (A book that Cowen “wrote” with AI last year is sort of similar; it got plenty of attention, as a novelty, but I don’t think the ideas in it had any lasting impact on economics, whatsoever.)

Like Casey Newton, Cowen also takes regular empty digs at me, in public and in private, and he too likes to paint me as some sort of lone crackpot, all his force directed towards me rather than facing or acknowledging the legions of people making related (and distinct) skeptical points. For example, like Newton, Cowen rarely if ever mentions other prominent skeptics such as the scientists and researchers Emily Bender, Abeba Birhane, Kate Crawford, Subbarao Kambhampati, Melanie Mitchell, or Arvind Narayanan, to name just a few; skeptical economists like Daron Acemoglu, Paul Krugman, and Brad DeLong also get short shrift. (Last year Andrew Orlowski, a columnist for the UK Telegraph, described Cowen as being blinded by his passion for AI, after Cowen posted — and then deleted without public acknowledgement — a ChatGPT-fabricated quote from Francis Bacon.)

Yesterday, Cowen truly jumped the shark, claiming triumphantly that OpenAI’s new release was “AGI, seriously”, singling out April 16, 2025 as some sort of historic “AGI Day”. Even Roose wouldn’t have made that argument.

This argument is, to use the technical, academic term, “half-assed”. Cowen isn’t some 18-year-old kid feeling his oats on Twitter; he is a tenured professor at a respected university. But he doesn’t even engage with any of the prior definitions in the literature, not from people like Shane Legg and Ben Goertzel and Peter Voss, who coined the term, nor from my own reconstructions. (When I pushed Cowen on this once, Cowen, famous for the lengthy research he does for his podcasts, complained that I write too much and that he couldn’t be bothered.)

A serious academic does not just dismiss years of writing and thinking that opposes him (characterizing it as so much “blah, blah, blah”) in favor of “I know it when I see it”, without even the hint of a serious argument.

Serious scholars consider alternatives (the classic alternative here is, roughly, that AGI can do anything cognitive a human can) and evaluate them, perhaps arguing against them. That would be fine. Instead Cowen has become a bully, both lowering the bar of AGI (redefining it to be whatever he royally decrees it to be, “L’AGI, c’est moi”) and lowering the standard of debate. He has altogether lost sight of what it means to be an intellectual. He fails even to spell out an argument, critical to his post, that would show why o3 somehow, allegedly historically, met his personal bar where prior models evidently did not.

I know that a lot of people admire Cowen; I used to be one of them. He certainly has a creative, broad, and voracious mind; I liked his contrarian streak. But his writing on AI is terrible. It’s not just wrong; it’s weak and intellectually lazy.

His claims about o3 yesterday didn’t survive even the day.

By the time the day was done, everyone from Ethan Mollick (one of the most optimistic professionals writing about AI) to Colin Fraser (one of the most consistent skeptics) was already acknowledging problems.

Fraser posted samples of o3 math stupidity within minutes of o3 becoming available.

In a more systematic study of the addition of largish two-digit integers, Fraser found performance of about 87%: not miserable, but far inferior to the zillion conventional systems that would score a perfect 100%. That’s business as usual for LLMs, but a sad commentary on how standards have fallen in the ChatGPT era.
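
To make the comparison concrete, here is a minimal sketch of that kind of addition benchmark (a reconstruction of the idea, not Fraser’s actual harness): `model` is a placeholder for whatever callable returns a model’s answer as a string, and the calculator-style baseline scores a perfect 100% by construction, so anything below that is a failure of the model, not of the task.

```python
import random
from typing import Callable

def addition_accuracy(model: Callable[[int, int], str],
                      n_trials: int = 1000, digits: int = 2) -> float:
    """Fraction of random additions the model gets exactly right."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        # Exact match against ground truth; partial credit is meaningless here.
        correct += model(a, b).strip() == str(a + b)
    return correct / n_trials

# The deterministic baseline is perfect by construction:
print(addition_accuracy(lambda a, b: str(a + b)))  # 1.0
# An LLM would be plugged in as, e.g., lambda a, b: query_llm(f"{a} + {b} = ?"),
# where query_llm is a hypothetical wrapper around a real model API.
```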

A company called Transluce, which has been studying LLM chains of reasoning, was even more devastating, posting a long thread that should be read in its entirety by anyone taking o3 seriously, with many examples of disturbing patterns of hallucination and gaslighting.

Peter Voss, one of the three people to have coined the term AGI, also weighed in on Cowen’s claims, responding to my prediction that Cowen’s post wouldn’t age well.

Even AI fan/Wharton prof Ethan Mollick quickly acknowledged that o3 was not really AGI, or was at best “jagged” AGI, in a comment quoting Cowen’s remarks from above.

Mollick’s notion of “jagged” AI is exactly what I used to mean by the term “pointillistic”, and what I predicted about o3 in November: AI that works some of the time but not always, markedly better on some problems than others.

The jaggedness (or pointillism) comes about because LLMs are regurgitation-with-minor-changes machines. When a particular prompt is close enough to a bunch of prior data points, LLMs do well; when prompts subtly differ from the prior cases in their databases, they often fail. Here’s how I put it in an essay here in September 2023 (echoed again in my essay on LLM failures of formal reasoning last year):

LLMs remain pointillistic masses of blurry memory, never as systematic as reasoning machines ought to be… What I mean by pointillistic is that what they answer very much depends on the precise details of what is asked and on what happens to be in the training set.

Cowen has never been able to grasp (or acknowledge) that, presumably because he doesn’t want to.
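
To make that intuition concrete, here is a deliberately crude toy, a nearest-neighbor memorizer rather than anything resembling a real LLM: it answers correctly whenever a query lands near something in its stored data, and confidently fails as soon as the query drifts outside it.

```python
import math

# "Training data": every two-digit addition problem, memorized verbatim.
training = {(a, b): a + b for a in range(10, 100) for b in range(10, 100)}

def memorizer(a: int, b: int) -> int:
    # Answer with the stored result for the closest remembered pair,
    # however far away the query actually is.
    nearest = min(training, key=lambda pair: math.dist(pair, (a, b)))
    return training[nearest]

print(memorizer(23, 45))    # 68  -> in-distribution, looks competent
print(memorizer(230, 450))  # 198 -> off-distribution, confidently wrong (should be 680)
```

A real LLM is of course vastly more sophisticated than a lookup table, but the failure shape is the same one pointillism describes: competence that tracks proximity to the training data rather than mastery of the underlying rule.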

As the wiser economist Brad DeLong just put it in a blunt essay, “if your large language model reminds you of a brain, it’s because you’re projecting—not because it’s thinking. It’s not reasoning, it’s interpolation. And anthropomorphizing the algorithm doesn’t make it smarter—it makes you dumber.”

Sad and disappointing to see Cowen decline from a thoughtful and independent voice in fields he knows well into what amounts to a slopganda-spewing industry shill striving for relevance in a field he doesn’t fully understand.

As I said at the top, AI can only improve if its limits as well as its strengths are faced honestly.


Gary Marcus wishes the pundit class would challenge industry rather than simply write PR.


