
Google AI Makes Up Fake Sayings and Claims They’re Real: What That Means for Search Accuracy

By Hilary Ong

Apr 28, 2025

Google’s AI Overviews has provoked an outcry from users and the press after being caught inventing meanings for made-up sayings and presenting them as real. Instead of flagging nonsense queries, it regularly serves up erroneous search results that leave consumers confused and aggravated. Many have labeled the technology experimental, and even Google itself has acknowledged that it is ultimately impossible to keep the feature fully accurate. That is concerning given the increasing adoption of AI-generated content into the mainstream search experiences we all engage with regularly.

At its core, the AI system simply predicts the next most likely word, relying on millions of words of training data to inform those predictions. The approach has produced wildly inconsistent results, with users reporting drastically different outputs when testing the same queries. Gary Marcus, one of the foremost experts on artificial intelligence, recently offered a striking assessment: just five minutes of testing revealed a shocking number of errors.
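To make that mechanic concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. It uses a toy bigram model, an assumption for illustration only; Google’s actual system is vastly larger and more sophisticated, but the core idea is the same: emit the statistically likeliest continuation, with no built-in check that the output is true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# training text, then always emit the most frequent follower. This is an
# illustration of next-word prediction only, not Google's real system.
corpus = "you can not lick a badger twice you can lead a horse to water".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # tally each observed word pair

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate a fluent-looking continuation from a one-word prompt.
word, output = "you", ["you"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # reads plausibly, yet nothing here verifies truth
```

The output sounds fluent because each word is a likely continuation of the last, which is exactly why a system built on this principle can assert a made-up saying with complete confidence.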

Experimental Nature of Google AI Overviews

For its part, Google has admitted outright that its AI Overviews feature is still in an experimental phase. That admission is welcome, given the growing paper trail showing that the system does not always perform the way users expect. Generative AI’s lack of predictability, often called the black box problem, has already raised plenty of alarm: the technology routinely misrepresents dissenting views and fails to account for rare expertise.

Experts also warn that AI Overviews can distill complex information into overly simplistic summaries, inadvertently creating a frustrating experience for users. Ridiculous rhymes such as “chew an anaconda!” and “squirm with a worm!” have already emerged. Extreme as they are, these examples show how unreliable, and even dangerous, generated outputs can be, and inaccuracies of this kind can do lasting damage to user trust in the platform.

Disparate Results and User Confusion

Sharply conflicting outcomes during testing have made for a nightmare on the user end. News articles have highlighted how answers to identical questions can differ greatly, often leaving people confused about what is true and what is not. Ziang Xiao of Johns Hopkins University addressed that question in a recent Wired article, stressing that it is difficult even for Google to guarantee reliable outputs from its AI.

The problem is not just the quality of the information. As Google continues to iterate on AI Overviews, increasing accuracy and consistency should be a top priority. The mismatches identified by experts show there is still work to be done before users can trust this technology across the board.

Author’s Opinion

Google’s current approach to AI Overviews demonstrates that, despite its ambition, the technology is not ready for widespread reliance. Users should be cautious when relying on AI-generated information, and Google must prioritize refining accuracy and consistency to restore trust. With the rapid evolution of AI, it’s crucial for companies like Google to focus on delivering precise, reliable tools, particularly in a space where misinformation can have real-world consequences.


Featured image credit: Marco Verch via CCNull

Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
