Advanced AI News
Gary Marcus

“Our language models are so ‘conscious’ we need to give them rights”

By Advanced AI Editor · April 25, 2025 · 4 Mins Read

My DMs this morning were filled with journalists and friends asking me about a story that the increasingly dubious New York Times ran yesterday. This one:

I am not going to read Kevin’s column, and I don’t think you need to, either. What he wrote about coding was wildly naive, and he couldn’t be bothered to ask whether the system would even extend to Pac-Man, let alone debugging. His near-religious endorsement of the imminence of AGI kind of speaks for itself, as does his apparent aversion to consulting seriously with experts who might disagree with his panglossian takes. His shtick is to write with awe, and to think uncritically; I have honestly had enough.

§

That said, we have seen this movie before. The last time I wrote about it, in June 2022, I called it nonsense on stilts.

For those who have forgotten the story, an AI safety engineer at Google, Blake Lemoine, felt that an LLM-based system that nobody remembers anymore, called LaMDA, had achieved “sentience”. The eternally sharp Abeba Birhane nailed it then with a tweet that she could equally repost today, in Roose’s honor:

The essence of my own argument then, back in June 2022, applies as much to today’s LLMs as to those of three years ago:

To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a wise-cracking chatbot impersonating a 13-year-old boy that won a scaled-down version of the Turing Test. None of the software in either of those systems has survived in modern efforts at “artificial general intelligence”, and I am not sure that LaMDA and its cousins will play any important role in the future of AI, either. What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.

I am not saying that no software ever could connect its digital bits to the world, à la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.

Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple of weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world, but just relate sequences of words to one another.
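That “autocomplete” point can be made concrete with a toy model. The sketch below is a hypothetical illustration (not code from any system discussed here): a bigram predictor that emits fluent-looking word sequences purely from co-occurrence counts, with no representation of the world behind the words.

```python
# A toy "language model": predict the next word from bigram counts alone.
# Hypothetical illustration only -- no grounding, no world model, just
# word-to-word statistics, which is the point being made above.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat sat on the rug .".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most frequent next word after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Generate a fluent-looking continuation from a seed word.
word, out = "the", ["the"]
for _ in range(5):
    word = predict(word)
    out.append(word)

print(" ".join(out))  # a plausible string, with zero understanding behind it
```

Scale the counts up by many orders of magnitude and condition on longer contexts, and you get something far more impressive, but the relationship between the words and the world is no different.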

Search and replace LaMDA with Claude, and it all still applies. I still don’t remotely see an argument that current models are sentient, nor any argument that scaling a model makes it more conscious, even if it can mimic more language from humans discussing consciousness. Claude does what LaMDA did better because it has more data, but I don’t see any real argument that Claude is any more sentient than a web browser.

Erik Brynjolfsson is often more bullish on AI than I am, but his 2022 commentary on the whole LaMDA affair, too, could be reposted today without changing a word:

Sad that The New York Times fell for it.

§

You can look at what Anthropic is doing (evaluating the “welfare” of its models) from the standpoint of the philosophy of consciousness (asking very reasonable questions like what would count as consciousness?, and how would we measure it in an animal or a machine?, and so on), but I think it is better to look at what is happening from the perspective of commerce. Anthropic is a business (which, incidentally, neglects to respect the rights of the artists and writers whose work they nick). I suspect the real move here is simply, as it so often is, to hype the product: basically by saying, hey, look at how smart our product is, it’s so smart we need to give it rights.

Just wait ‘til you see our spreadsheets!

Gary Marcus is shaking his head.


