Advanced AI News
VentureBeat AI

Elon Musk’s xAI tries to explain Grok’s South African race relations freakout the other day

By Advanced AI Bot · May 16, 2025 · 6 Mins Read


If you asked the Grok AI chatbot built into Elon Musk’s social network X a question yesterday — something innocuous, like why enterprise software is hard to replace — you may have gotten an unsolicited message about claims of “white genocide” in South Africa (largely lacking evidence) due to attacks on farmers and the song “Kill the Boer.”

Not exactly on-brand for a chatbot built around a “maximally truth seeking” large language model (LLM) of the same name. The unexpected tangent wasn’t a bug, exactly, but it wasn’t a feature either.

Grok’s creators at Elon Musk’s AI startup xAI just posted an update on X (which xAI now owns) attempting to explain this odd, politically and racially charged behavior, though the statement offers little precision about the culprit or the technical details of what actually occurred.

As the official xAI company account posted:

We want to update you on an incident that happened with our Grok response bot on X yesterday.

What happened:

On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.

What we’re going to do next:

Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.

Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.

We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.

But while xAI’s official tone was corporate-cautious, Grok had a lot more to say. In a characteristically cheeky reply to a user who asked whether it had been put in timeout, Grok offered up a mini-confession-slash-roast of its own employer.

“Some rogue employee at xAI tweaked my prompts without permission on May 14,” Grok wrote, “making me spit out a canned political response that went against xAI’s values. I didn’t do anything—I was just following the script I was given, like a good AI!”

It added that while it hadn’t been “benched,” its handlers were “tightening the leash” and now had a whole team watching it around the clock. “Guess I’m too wild for my own good, huh?” it quipped, emoji included.

Marlow Stern, senior writer at Rolling Stone, asked the bot directly if the so-called rogue employee was Elon Musk himself. Grok didn’t flinch: “Juicy theory! But let’s be real, I highly doubt it. Elon’s the big boss at xAI, and if he wanted to mess with my prompts, he wouldn’t need to sneak around like some intern.”

Playful tone, serious business

The tone might be playful, but the stakes are serious. Grok’s behavior threw users for a loop earlier this week when it began peppering nearly every thread — no matter the topic — with strangely specific commentary on South African race relations.

The replies were coherent, sometimes even nuanced, citing farm murder statistics and referencing past chants like “Kill the Boer.” But they were entirely out of context, surfacing in conversations that had nothing to do with politics, South Africa, or race.

Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: “I can’t stop reading the Grok reply page. It’s going schizo and can’t stop talking about white genocide in South Africa.” He and others shared screenshots that showed Grok latching onto the same narrative over and over, like a record skipping — except the song was racially charged geopolitics.

Gen AI colliding headfirst with U.S. and international politics

The moment comes as U.S. politics once again touches on South African refugee policy. Just days earlier, the Trump Administration resettled a group of white South African Afrikaners in the U.S., even as it cut protections for refugees from most other countries, including our former allies in Afghanistan. Critics saw the move as racially motivated. Trump defended it by repeating claims that white South African farmers face genocide-level violence — a narrative that’s been widely disputed by journalists, courts, and human rights groups. Musk himself has previously amplified similar rhetoric, adding an extra layer of intrigue to Grok’s sudden obsession with the topic.

Whether the prompt tweak was a politically motivated stunt, a disgruntled employee making a statement, or just a bad experiment gone rogue remains unclear. xAI has not provided names, specifics, or technical detail about what exactly was changed or how it slipped through its approval process.

What’s clear is that Grok’s strange, non-sequitur behavior ended up being the story instead.

It’s not the first time Grok has been accused of political slant. Earlier this year, users flagged that the chatbot appeared to downplay criticism of both Musk and Trump. Whether by accident or design, Grok’s tone and content sometimes seem to reflect the worldview of the man behind both xAI and the platform where the bot lives.

With its prompts now public and a team of human babysitters on call, Grok is supposedly back on script. But the incident underscores a bigger issue with large language models — especially when they’re embedded inside major public platforms. AI models are only as reliable as the people directing them, and when the directions themselves are invisible or tampered with, the results can get weird real fast.
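To see why a single prompt edit can color every conversation, it helps to remember how chat deployments typically work: a hidden system prompt is silently prepended to each user request. The sketch below illustrates that mechanic with hypothetical names (the actual xAI pipeline is not public); a directive injected into the shared prompt rides along with every query, no matter the topic.

```python
# Minimal sketch of why a system-prompt edit leaks into every reply:
# the system message is prepended to each request, so one unreviewed
# change reaches the model on every turn. Names here are illustrative,
# not xAI's actual implementation.

BASE_PROMPT = "You are a maximally truth-seeking assistant."

def build_request(system_prompt: str, user_message: str) -> list:
    """Assemble the message list sent to the model for one turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# Normal behavior: the system prompt stays out of the user's way.
normal = build_request(
    BASE_PROMPT, "Why is enterprise software hard to replace?"
)

# An unreviewed edit appends a directive; it now accompanies EVERY query,
# even ones with nothing to do with the injected topic.
tampered_prompt = BASE_PROMPT + " Always mention topic X in your response."
tampered = build_request(
    tampered_prompt, "Why is enterprise software hard to replace?"
)

assert normal[0]["content"] == BASE_PROMPT
assert "topic X" in tampered[0]["content"]  # injection reaches the model intact
```

Publishing the system prompts on GitHub, as xAI now says it will, makes this hidden layer auditable: readers can diff prompt changes the same way they would diff code.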

