Fact check: Google Lens’s AI overviews shared misleading information

By Advanced AI Editor | July 14, 2017

This roundup of claims has been compiled by Full Fact, the UK’s largest fact checking charity working to find, expose and counter the harms of bad information.

The AI overviews of searches with Google Lens have been giving users false and misleading information about certain images being shared widely on social media, a Full Fact investigation has revealed.

This has happened for videos supposedly relating to the wars in Ukraine and Gaza, the India-Pakistan conflict, the June 2025 Air India plane crash and small boat arrivals in the UK.

Full Fact used Google Lens to run a number of searches for screenshots of key moments from misleading videos we've fact checked in recent months, and found that the AI overviews for at least 10 of these clips failed to recognise inauthentic content or otherwise shared false claims about what the images showed.

In four examples, the AI overviews repeated the false claims we saw shared with these clips on social media – claims which Full Fact has debunked. We also found AI overviews changed with each search, even when searching the same thing, so we often weren’t able to generate identical or consistent responses.

Google Lens is a visual search tool that analyses images – including stills from videos – and can surface similar pictures found online, as well as text or objects that relate to the image. According to Google, the AI overviews which sometimes appear at the top of Google Lens search results bring together “the most relevant information from across the web” about the image, including supporting links to related pages.

These AI overviews do have a note at the bottom saying: “AI responses may include mistakes”. This note links to a page that says: “While exciting, this technology is rapidly evolving and improving, and may provide inaccurate or offensive information. AI Overviews can and will make mistakes.”

When we asked Google about the errors we identified, a spokesperson said they were able to reproduce some of them, and that they were caused by problems with the visual search results rather than with the AI overviews themselves. They said the search results surface web sources and social media posts that combine the visual match with false information, which then informs the AI overview.

A Google spokesperson told us: “We aim to surface relevant, high quality information in all our Search features and we continue to raise the bar for quality with ongoing updates and improvements. When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve and take appropriate action under our policies.”

They added that the AI overviews are backed by search results, and claimed they rarely “hallucinate”. Hallucination in this context refers to when a model generates false or conflicting information, often presented confidently, although there is some disagreement over the exact definition.

Even if AI overviews are not the source of the problem, as Google argues, they are still spreading false and misleading information on important and sensitive subjects.

Miscaptioned footage

We found several instances of AI overviews repeating claims debunked by Full Fact about real footage miscaptioned on social media.

For example, a viral video claimed to show asylum seekers arriving in Dover in the UK, but this isn’t true – it actually appears to show crowds of people on a beach in Goa, India. Despite this, the AI overview generated when we searched a still from this footage repeated the false claim, saying: “The image depicts a group of people gathered on Dover Beach, a pebble beach on the coast of England.”

Another clip circulated on social media with claims it showed the Air India plane that crashed in Ahmedabad, India, on June 12. The AI overview for a key frame similarly said: “The image shows an Air India Boeing 787 Dreamliner aircraft that crashed shortly after takeoff from Ahmedabad, India, on June 12, 2025, while en route to London Gatwick.” But this is false – the footage shows a plane taking off from Heathrow in May 2024.

Footage almost certainly generated with AI

In June, we wrote about a video shared on social media with claims it shows “destroyed Russian warplanes” following Ukraine’s drone attacks on Russian aircraft. But the clip is not real, and was almost certainly generated with artificial intelligence.

When searching multiple key frames from the footage with Google Lens, we were given several different AI overviews – none of which mentioned that the footage is not real and is likely to be AI-generated.

The overview given for one screenshot said: “The image shows two damaged warplanes, possibly Russian, on a paved surface. Recent reports indicate that multiple warplanes have exploded, including Russian aircraft that attacked a military base in Siberia.”

This overview supports the false claim circulating on social media that the video shows damaged Russian warplanes, and while it’s true that aircraft at Russia’s Belaya military base in Siberia were damaged in that Ukrainian attack, it doesn’t make sense to suggest that Russian aircraft attacked a military base in Siberia, which is itself part of Russia.

AI overviews given for other screenshots of the clip wrongly claimed “the image shows the remains of several North American F-82 Twin Mustang aircraft”. F-82s were used by the US Air Force but were retired in 1953. They also had a distinct design, with parallel twin cockpits and single tail sections, which doesn’t match any of the planes depicted in the likely AI-generated video.

Footage from a video game

Gameplay footage from the military simulation game Arma 3 often circulates on social media with claims it shows genuine scenes from conflict.

We found several instances when Google Lens’s AI overviews failed to distinguish key frames of these clips from real footage, and instead appeared to hallucinate specific scenarios loosely relating to global events.

For example, one Arma 3 clip was shared online with false claims it showed Israeli helicopters being shot down over Gaza. When we searched a key frame with Google Lens, amid Israel-Iran air strikes following Israel’s attack on Iranian nuclear infrastructure in June, the AI overview said it showed “an Israeli Air Force (IAF) fighter jet deploying flares, likely during the recent strikes on Iran”. But the overview did not say that the footage is not real.

Another Arma 3 clip was shared amid conflict between India and Pakistan in May with false claims it showed Pakistan shooting down an Indian Air Force Rafale fighter jet near Bahawalpur in Pakistan.

The AI overview said the image showed “a Shenyang J-35A fighter jet, recently acquired by the Pakistan Air Force from China”. While there have been recent reports of the Pakistan Air Force acquiring some of these Chinese fighter jets, this is not what the footage shows, and the AI overview did not say the clip was from a video game.

Use with caution

Google Lens is an important tool and often the first thing fact checkers use when trying to verify footage, and we’ve encouraged the public to use it too. This makes the inaccuracy of Google Lens’s AI overviews concerning, especially given that the information features prominently at the top of people’s search results, meaning false or misleading claims could be the first thing people see.

Full disclosure: Full Fact has received funding from Google and Google.org, Google’s charitable foundation. You can see more details about the funding Full Fact receives here. We are editorially independent and our funders have no editorial control over our content.


