Advanced AI News
Charlie Kirk’s death shows limits of AI chatbots for breaking news

By Advanced AI Editor | September 12, 2025 | 6 Mins Read


It took mere hours for the internet to spiral into conspiracy theories about the murder of Charlie Kirk, who died yesterday after being shot at a public event in Utah, according to reports.

The far-right commentator, who often engaged in vitriolic debates about immigration, gun control, and abortion on college campuses, was killed while on a university tour with his conservative media group, Turning Point USA. The organization has spent the last decade building conservative youth coalitions at top universities and has become closely affiliated with the nationalist MAGA movement and President Trump. As early reports of the incident rolled in from both reputable news agencies and pop culture update accounts, it was unclear if Kirk was alive or if his shooter had been apprehended.


But internet sleuths on both sides of the political aisle were already gearing up for battle on social media, trying to identify the names of individuals in the crowd and attempting keyboard forensic science as they zoomed in closer and closer on the graphic video of Kirk being shot. Some alleged that Kirk's bodyguards were trading hand signals right before the shot rang out. Others claimed the killing was actually a cover-up to distract from Trump's unearthed communications with deceased sex trafficker Jeffrey Epstein.

Exacerbating the matter were AI-powered chatbots, which have taken over social media platforms both as integrated robotic helpers and as AI spam accounts that automatically reply to exasperated users.

In one example, according to media and misinformation watchdog NewsGuard, an X account named @AskPerplexity, seemingly affiliated with the AI company, told a user that its initial claim that Charlie Kirk had died was actually misinformation and that Kirk was alive. The reversal came after the user prompted the bot to explain how common sense gun reform could have saved Kirk’s life. The response has been removed since NewsGuard’s report was published.

“The Perplexity Bot account should not be confused with the Perplexity account,” a Perplexity spokesperson clarified in a statement to Mashable. “Accurate AI is the core technology we are building and central to the experience in all of our products. Because we take the topic so seriously, Perplexity never claims to be 100% accurate. But we do claim to be the only AI company working on it relentlessly as our core focus.”

Elon Musk’s AI bot, Grok, erroneously confirmed to a user that the video was an edited “meme” video, after claiming that Kirk had “faced tougher crowds” in the past and would “survive this one easily.” The chatbot then doubled down, writing: “Charlie Kirk is debating, and effects make it look like he’s ‘shot’ mid-sentence for comedic effect. No actual harm; he’s fine and active as ever.” Security experts said at the time that the videos were authentic.

In other cases NewsGuard documented, users shared chatbot responses to confirm their own conspiracies, including claims that his assassination was planned by foreign actors and that his death was a hit ordered by Democrats. One user shared an AI-generated Google response that claimed Kirk was on a hit list of perceived Ukrainian enemies. Grok told yet another X user that CNN, NYT, and Fox News had all confirmed a registered Democrat was seen at the crime scene and was a confirmed suspect; none of that was true.

“The vast majority of the queries seeking information on this topic return high quality and accurate responses. This specific AI Overview violated our policies and we are taking action to address the issue,” a Google spokesperson told Mashable.

Mashable also reached out to Grok parent company xAI for comment.

Chatbots can’t be trained as journalists

While AI assistants may be helpful for simple daily tasks — sending emails, making reservations, creating to-do lists — their weakness at reporting news is a liability for everyone, according to watchdogs and media leaders alike.

“We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?” asked Deborah Turness, the CEO of BBC News and Current Affairs, in a blog from earlier this year.

One problem is that chatbots just repeat what they’re told, according to the NewsGuard report:

“The growing reliance on AI as a fact-checker during breaking news comes as major tech companies have scaled back investments in human fact-checkers, opting instead for community or AI-driven content moderation efforts. This shift leaves out the human element of calling local officials, checking firsthand documents and authenticating visuals, all verification tasks that AI cannot perform on its own.”

Additionally, while chatbots offer personal, isolated interactions, they are notoriously sycophantic, doing everything they can to please and confirm the beliefs of the user.

“Our research has found that when reliable reporting lags, chatbots tend to provide confident but inaccurate answers,” explained McKenzie Sadeghi, NewsGuard researcher and author of the aforementioned analysis. “During previous breaking news events, such as the assassination attempt against Donald Trump last year, chatbots would inform users that they did not have access to real-time, up-to-date information.” But since then, she explained, AI companies have leveled up their bots, including affording them access to real-time news as it happens.

“Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors. As a result, chatbots repeat and validate false claims during high-risk, fast-moving events,” she said. “Algorithms don’t call for comment.”

Sadeghi explained that chatbots prioritize the loudest voices in the room, instead of the correct ones. Pieces of information that are more frequently repeated are granted consensus and authority by the bot’s algorithm, “allowing falsehoods to drown out the limited available authoritative reporting.”

The Brennan Center for Justice at NYU, a nonpartisan law and policy institute, also tracks AI’s role in news gathering. The organization has raised similar alarms about the impact of generative AI on news literacy, including its role in empowering what is known as the “Liar’s Dividend” — or the benefits gained by individuals who stoke confusion by claiming real information is false. Such “liars” contend that truth is impossible to determine because, as many now argue, any image or video can be created by generative AI.

Even with the inherent risks, more individuals have turned to generative AI for news as companies continue ingraining the tech into social media feeds and search engines. According to a Pew Research survey, individuals who encountered AI-generated search results were less likely to click on additional sources than those who used traditional search engines. Meanwhile, major tech companies have scaled back their human fact-checking teams in favor of community-monitored notes, despite widespread concerns about growing misinformation and AI’s impact on news and politics. In July, X announced it was piloting a program that would allow chatbots to generate their own community notes.
