Advanced AI News

1984, but with LLMs – by Gary Marcus

By Advanced AI Editor · June 21, 2025 · 4 Mins Read


“‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’”

— 1984, George Orwell

In February, I wrote an essay here called Elon Musk’s terrifying vision for AI, warning that the “richest man in the world [was trying to build] a Large Language Model that spouts propaganda in his image”.

But he couldn’t get the vision to work. He had hoped Grok would say stuff like this, echoing his own beliefs.

But the Grok output in the tweet here appears in hindsight to have been Elon’s fantasy, rather than Grok’s reality. Nobody could actually replicate it. Here’s what I got from Grok 3 this morning:

This output comes much closer to the consensus view, rather than Elon’s view.

It turns out that, thus far, the major LLMs haven’t been that different from one another, as multiple studies have shown. Almost every LLM, even his own, could be argued, for example, to have a slight liberal bias. Probably none of them would (without special prompting) rant on The Information the way that Elon did in February.

No LLM is fully coherent in its “beliefs”: by nature, LLMs parrot a variety of conflicting things from their training data, which is itself inconsistent, rather than reasoning their way to a coherent view the way a rational person might. But other things being equal, their “opinions” tend to be bland and middle of the road.

Why do so many models tend to cluster not far from the middle? Necessity has (heretofore) meant that pretty much every recent LLM has been trained on more or less the same data. (Why? LLMs are desperately greedy for data, so pretty much everyone has used every bit of data they can scrape from the internet, and there is only one internet.)

Short of using some fine-tuning tricks that likely reduce accuracy, it’s actually pretty hard to make an LLM that stands out from the pack if you use the same data as everyone else. Grok is maybe a little closer to the center, but hardly as far right as Mr. Musk himself is.

Unfortunately, Musk seems finally to have figured this out. A bit closer to the center isn’t going to satisfy him, so he hit on a different technique to achieve the effect that he wants. He is going to rewrite the data:

Which is to say he is going to rewrite history. Straight out of 1984.

He couldn’t get Grok to align with his own personal beliefs so he is going to rewrite history to make it conform to his views.

§

Elon Musk couldn’t control Washington, so now he is going to try to control your mind instead.

If you don’t think Altman and Zuckerberg are going to try to do the same, you aren’t paying attention. Together with Jony Ive, Altman is apparently aiming to build a smart necklace or similar, presumably with 24/7 access to everything you say, and to make his LLMs, which he can shape as he pleases, your constant companion. Surveillance tools Orwell himself barely dreamed of.

Zuckerberg, meanwhile, is spending his billions trying to make AI to his own liking (and trying his best to hire Altman’s best staff away, apparently offering $100 million annual salaries to some). The Chinese government has already made conformity to their Party line a legal requirement; Putin is probably on it, and it’s not inconceivable that the present US government might pressure companies to do the same.

LLMs may not be AGI, but they could easily become the most potent form of mind control ever invented.

Gary Marcus is terrified of what is to come, and doesn’t know whether democracy as we knew it will survive.
