Advanced AI News
Grok’s ‘white genocide’ meltdown nods to the real dangers of the AI arms race

By Advanced AI Editor | May 20, 2025 | 4 Mins Read


A version of this story appeared in CNN Business’ Nightcap newsletter. To get it in your inbox, sign up for free here.

It’s been a full year since Google’s AI overview tool went viral for encouraging people to eat glue and put rocks on pizza. At the time, the mood around the coverage seemed to be: Oh, that silly AI is just hallucinating again.

A year later, AI engineers have solved hallucination problems and brought the world closer to their utopian vision of a society whose rough edges are being smoothed out by advances in machine learning as humans across the planet are brought together to…

Just kidding. It’s much worse now.

The problems posed by large language models are as obvious as they were last year, and the year before that, and the year before that. But product designers, backed by aggressive investors, have been busy finding new ways to shove the technology into more spheres of our online experience, so we’re finding all kinds of new pressure points — and rarely are they as fun or silly as Google’s rocks-on-pizza glitch.

Take Grok, the xAI model that is becoming almost as conspiracy-theory-addled as its creator, Elon Musk.

The bot last week devolved into a compulsive South African “white genocide” conspiracy theorist, injecting a tirade about violence against Afrikaners into unrelated conversations, like a roommate who just took up CrossFit or an uncle wondering if you’ve heard the good word about Bitcoin.

xAI blamed Grok’s unwanted rants on an unnamed “rogue employee” tinkering with Grok’s code in the extremely early morning hours. (As an aside in what is surely an unrelated matter, Musk was born and raised in South Africa and has argued that “white genocide” was committed in the nation — it wasn’t.)

Grok also cast doubt on the Department of Justice’s conclusion that Jeffrey Epstein’s death was a suicide by hanging, saying that the “official reports lack transparency.” The Musk bot also dabbled in Holocaust denial last week, as Rolling Stone’s Miles Klee reports. Grok said on X that it was “skeptical” of the consensus estimate among historians that 6 million Jews were murdered by the Nazis because “numbers can be manipulated for political narratives.”

Manipulated, you say? What, so someone with bad intentions could input their own views into a data set in order to advance a false narrative? Gee, Grok, that does seem like a real risk. (The irony here is that Musk, no fan of traditional media, has gone and made a machine that does the exact kind of bias-amplification and agenda-pushing he accuses journalists of doing.)

The Grok meltdown underscores some of the fundamental problems at the heart of AI development that tech companies have so far yada-yada-yada’d through anytime they’re pressed on questions of safety. (Last week, CNBC published a report citing more than a dozen AI professionals who say the industry has already moved on from the research and safety-testing phases and is dead-set on pushing more AI products to market as soon as possible.)

Let’s forget, for a moment, that so far every forced attempt to put AI chatbots into our existing tech has been a disaster, because even the baseline use cases for the tech are either very dull (like having a bot summarize your text messages) or extremely unreliable (like having a bot summarize your text messages, poorly).

First, there’s the “garbage in, garbage out” issue that skeptics have long warned about. Large language models like Grok and ChatGPT are trained on data vacuumed up indiscriminately from across the internet, with all its flaws and messy humanity baked in.

That’s a problem because even when nice-seeming CEOs go on TV and tell you that their products are just trying to help humanity flourish, they’re ignoring the fact that their products tend to amplify the biases of the engineers and designers that made them, and there are no internal mechanisms baked into the products to make sure they serve users, rather than their masters. (Human bias is a well-known problem that journalists have spent decades protecting against in news by building transparent processes around editing and fact-checking.)

But what happens when a bot is made without the best of intentions? What if someone wants to build a bot to promote a religious or political ideology, and that someone is more sophisticated than whoever that “rogue employee” was who got under the hood at xAI last week?

“Sooner or later, powerful people are going to use LLMs to shape your ideas,” AI researcher Gary Marcus wrote in a Substack post about Grok last week. “Should we be worried? Hell, yeah.”



