It’s no secret that Google has been going all-in on artificial intelligence. The Mountain View-based giant has been racing to roll out new and more powerful AI models and already offers multiple AI-powered tools like NotebookLM. Google is also constantly working on experimental AI projects, the most recent being an app called Google AI Edge Gallery, which lets you run powerful AI models entirely on-device, with no Wi-Fi connection or cellular data required.
Google’s AI Overviews don’t need much introduction and have been around for more than a year now. Despite that, these overviews keep embarrassing the company and are all the proof anyone needs that AI isn’t quite ready to take over the world just yet. Its most recent blunder? Telling people it’s still 2024 with utmost confidence when asked what year it is, even though we’re already six months deep into 2025!
Google fixed the problematic AI Overview, but the issues aren’t over
Earlier this week, a user posted on the r/google subreddit that asking “is it 2025” gave them the following AI Overview:
No, it is not 2025. The current year is 2024, as of today, May 27, 2024. The year 2025 is in the future, and the current date is May 27, 2025.
Users in the comments shared even more screenshots, and though the wording in their AI Overviews varied slightly, the answer buried within was the same. For instance, one user got an AI Overview that first stated it isn’t 2025, then contradicted itself by saying the current year is 2025 — coincidentally, the exact same response I received.
Yeah, AI Overviews are clearly intelligent. Never mind that we’re six months into the year; apparently we all only just entered 2025 thanks to time zone differences! Google eventually fixed the answer, and the AI Overview began responding that we are indeed living in the year 2025. A Google spokesperson shared the following statement with the folks over at Android Authority:
As with all Search features, we rigorously make improvements and use examples like this to update our systems. The vast majority of AI Overviews provide helpful, factual information and we’re actively working on an update to address this type of issue.
Though Google quickly jumped on this and rolled out a fix, it’s just another reason the company needs to slow down on its AI development. And for us, it’s another reminder that AI simply cannot be trusted. AI Overviews pull their answers from sources online, and they don’t seem to do the best job even at that; in the example above, the overview kept citing Wikipedia as its source! And this is far from the first time a result from AI Overviews has gone viral for all the wrong reasons.
In May last year, Google’s AI Overviews told people to eat glue and rocks, and even recommended adding glue to pizza. Just a few weeks ago, typing a completely random sentence that merely sounded like an idiom into Google Search would trigger an AI Overview confidently explaining it as if it were an actual idiom.
Android Police’s former Phones Editor, Will Sattelberg, posted a thread sharing his experience when he tried searching for a synonym for the word mania that started with the letter T. Instead of suggesting a synonym that fit his criteria (one does exist) or admitting it couldn’t find one, the AI Overview took it one step further and served up the “perfect” answer: frenzy.
“Frenzy” clearly starts with a “T” — great job, Google! So take this as a friendly reminder, folks: don’t trust AI, and especially not AI Overviews.