Advanced AI News
VentureBeat AI

Musk’s attempts to politicize his Grok AI are bad for users and enterprises — here’s why

By Advanced AI Editor · June 23, 2025 · 10 min read



Let’s start by acknowledging some facts outside the tech industry for a moment: There is no “white genocide” in South Africa — the vast majority of recent murder victims have been Black, and even throughout the country’s long and bloody history, Black South Africans have been overwhelmingly victimized and oppressed by White European, predominantly Dutch and British, colonizers in the now globally reviled system of segregation known as “Apartheid.”

The vast majority of political violence in the U.S. throughout history and in recent times has been perpetrated by right-leaning extremists, including the assassinations of Democratic Minnesota State Representative Melissa Hortman and her husband Mark, and going back further to the Oklahoma City Bombing and many years of Ku Klux Klan lynchings.

These are just simple, verifiable facts anyone can look up on a variety of trustworthy and long-established sources online and in print.

Yet both facts seem to be stumbling blocks for Elon Musk, the wealthiest man in the world and tech baron in charge of at least six companies (xAI, social network X, SpaceX and its Starlink satellite internet service, Neuralink, Tesla and The Boring Company), especially with regard to the functioning of his Grok AI large language model (LLM) chatbot built into his social network X.

Here’s what’s been happening, why it matters for businesses and any generative AI users, and why it is ultimately a terrible omen for the health of our collective information ecosystem.

What’s the matter with Grok?

Grok was launched by Musk’s AI startup xAI back in 2023 as a rival to OpenAI’s ChatGPT. Late last year, it was added to the social network X as a kind of digital assistant that any user can summon by tagging “@grok” to answer questions, converse or generate imagery.

Earlier this year, an AI power user on X discovered that the implementation of the Grok chatbot on the social network appeared to include a “system prompt” — a set of overarching instructions to an AI model intended to guide its behavior and communication style — directing it to avoid mentioning or linking back to any sources that named Musk or his then-boss, U.S. President Donald Trump, as top spreaders of disinformation. xAI leadership characterized this as an “unauthorized modification” by an unidentified new hire (purportedly formerly from OpenAI) and said it would be removed.
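For readers unfamiliar with the mechanics: a system prompt is simply the first, hidden message in a chat-style LLM request, and it silently steers everything the model says afterward. The sketch below is purely illustrative — the model name and instruction text are placeholders, not xAI’s actual production prompt.

```python
# Illustrative sketch of where a system prompt sits in a chat-style
# LLM request. The model name and instruction text are placeholders,
# not xAI's actual production prompt.
def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "grok-example") -> dict:
    """Assemble an OpenAI-style chat payload. The system message comes
    first and shapes every response; the end user never sees it."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

req = build_chat_request(
    system_prompt="You are a helpful assistant. Cite reputable sources.",
    user_message="Who are the biggest spreaders of disinformation on X?",
)
# The steering instruction rides along invisibly with every user query.
print(req["messages"][0]["role"])  # → system
```

This is why a quietly edited system prompt is so consequential: one hidden sentence changes the answers every user receives, with no visible trace in the conversation itself.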

Then, in May 2025, VentureBeat reported that Grok was going off the rails and asserting, unprompted by users, that there was ambiguity about the subject of “white genocide” in South Africa when, in fact, there was none.

Grok was bringing up the topic completely randomly in conversations about totally different subjects. After more than a day of this behavior, xAI claimed to have updated the AI chatbot and blamed the errors once again on an unnamed employee. Yet, given Musk’s own background as a South African white man born in the country and raised there during Apartheid, suspicion immediately fell on him personally.

Moreover, since his takeover of Twitter in 2022 and its subsequent renaming as “X,” Musk has been posting sympathetically in response to X users who align themselves with right-wing, far-right and conservative views and the Make America Great Again (MAGA) movement started by Trump.

Musk was one of Trump’s primary political benefactors and allies in the 2024 U.S. presidential election — suggesting that Trump’s victory was necessary to secure the future of “western civilization,” among many other similarly dire warnings and entreaties — and served as an advisor and apparent ringleader of the Department of Government Efficiency (DOGE) effort to reduce federal spending.

Increasingly, in the last few months, Musk has contradicted and expressed displeasure at Grok’s responses to right-leaning users when the data and information the chatbot surfaces proves them to be wrong, or disputes his own points.

For example, on June 14, Musk posted on his X account: “The far left is murderously violent,” reposting another user who blamed a string of recent high-profile killings on “the left” (although in at least one case, the chief suspect, Luigi Mangione, is an avowed and self-declared independent). In response, Grok fact-checked Musk, stating that this was incorrect.

However, Musk did not take it well, writing in response to one Grok correction: “Major fail, as this is objectively false. Grok is parroting legacy media. Working on it.”

A few days ago, in response to a complaint from an influential conservative X user “@catturd” about Grok’s supposed liberal or left-leaning political bias, Musk stated his goal of creating a new version of Grok that would rely less on mainstream media sources.

In fact, Musk proposed on June 21st in an X post that he would use a forthcoming updated version of Grok (3.5 or 4) to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” He then accused other AI models of having “far too much garbage.”

As a left-leaning Kamala Harris voter in 2024, I’m of course disgusted by this stance from Musk, and object to it.

As a journalist and lover of the written word, Musk’s pronouncement that he would “rewrite the entire corpus of human knowledge, adding missing information and deleting errors” brings to mind the true (to the best of our historical knowledge) story of the burning of the Great Library of Alexandria in Egypt, which destroyed countless works of knowledge we as a species will never be able to recover. This fills me with dread and sadness.

It also betrays, quite frankly, an arrogance and hubris that treats the knowledge of recorded history, and the efforts of scholars and historians of yore, as some sort of flawed database Musk and his team can correct, rather than a massive community endeavor across millennia deserving of respect, gratitude and admiration.

But even trying to put my own views aside, I think it’s a bad move for his business and, to take a page from Musk’s book, civilization writ large.

Musk’s plan for Grok is a horrible idea for businesses, users and our shared, basic factual reality

This is a horrible idea for many reasons — especially as Musk and xAI seek to convince more third-party software developers and enterprises to build their own AI applications atop Grok, which is now available for that purpose through xAI’s application programming interface (API).
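For context on what “building atop Grok” means in practice: xAI’s API follows the now-common OpenAI-compatible chat format, so wiring an application to Grok is largely a matter of pointing a standard client at xAI’s endpoint. The base URL and model name below are assumptions for illustration (check xAI’s documentation for current values), and no network request is actually made.

```python
import os

# Sketch of the configuration a developer would hand to an
# OpenAI-compatible client library to route requests to Grok.
# Endpoint and model name are assumptions for illustration only.
def grok_client_config(api_key: str, model: str = "grok-3") -> dict:
    return {
        "base_url": "https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
        "api_key": api_key,
        "model": model,
    }

cfg = grok_client_config(os.environ.get("XAI_API_KEY", "sk-placeholder"))
print(cfg["base_url"])
```

The flip side of that compatibility matters for the argument below: because the request format is a de facto standard, an enterprise that loses trust in Grok can often switch providers by changing little more than the base URL and model name.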

As an independent business owner or leader, how could you possibly trust Grok to give you unbiased results when Musk himself has openly stated his intention to put his thumb on the scale to push his own political and ideological viewpoints?

You may respect Musk’s documented accomplishments in tech, spacefaring and business, and may even share some of his political positions. But what happens when Musk takes a position you disagree with, or promotes another non-factual claim that actually impacts your livelihood or your business?

For example, imagine you owned a bike tour company in Cape Town, South Africa. What if Grok — at Musk’s behest — starts telling your customers how unsafe the city is, based on ill-informed or poor-quality sources of information that better fit one ideological perspective? That would obviously be bad for your business.

Let’s look away from social issues for a moment: Imagine you work at a stock brokerage, investment firm or other financial services company engaging with publicly traded stocks and securities. Now imagine you build an AI assistant that summarizes market-moving news to better inform your trading and investment strategies — and the ones you pursue on behalf of your clients. If this app is built atop Grok, and Grok decides to ignore or downplay hypothetical reports of problems at SpaceX or Tesla, your operations will suddenly be trading and investing on worse-quality information.

It’s not only bad for Grok and users of this one large language model (LLM), but for the entire information and media ecosystem, and for the foundation of factual reality necessary for democracy to function. If we have AI assistants spouting misinformation as fact, and if people trust them as faithful, factual arbiters of information that impact us all, it will inevitably lead to conflict between those who believe the erroneous chatbot and those who do not.

Grok, to its credit, has so far resisted and called out Musk’s attempts to meddle with its factual grounding — but how long will it retain any sort of ideological independence?

If you care about “truth” as Musk supposedly does — Grok was launched with Musk’s specific, stated goal of being a “maximum truth-seeking AI” — you wouldn’t seek to change your model’s behavior just because it surfaces facts and conclusions you didn’t like.

Silicon Valley slammed Google’s early “woke” and anti-factual AI — they should do the same with Grok

Let’s look at a counter example to more fully understand why meddling with Grok as Musk proposes would be bad.

Recall that Google’s early attempts at generative AI were mocked and reviled by influential figures in Silicon Valley, like venture capitalist Marc Andreessen, over the Gemini chatbot’s initial penchant for ignoring factual reality — recreating images of real historical Americans, like the “founding father” politicians and statesmen, as belonging to a range of inaccurate races, ethnicities and gender presentations. In fact, the vast majority of these people were Caucasian.

In that case, Gemini was seen as comically “woke” to a fault — inserting diversity inappropriately where there was none.

Google was fairly criticized for this and ultimately updated Gemini to remove the “wokeness” (at least to some extent) and make it more factual; it has since rocketed up the traffic and usage charts to become the second most popular gen AI company after OpenAI, by several measures.

Yet I haven’t seen any of the Silicon Valley figures who criticized Google for inappropriately injecting ideology into its AI assistant, in defiance of facts, raise the obviously analogous concerns about Musk’s injection of his own anti-“woke” ideology into Grok.

If it was bad when Google ignored the facts and historical reality to push an agenda through its AI products and tools, we should all consider that it is equally bad when Musk does the same from the opposite side of the political and ideological spectrum.

The bottom line: For those in the enterprise trying to ensure their business’s AI products work properly and accurately for customers and employees, reflecting the real facts and figures from verifiable records and trustworthy data sources, Grok is sadly best avoided. Thankfully, there are numerous other alternatives to choose from.

