
Google’s AI Search Features Generate Humorous Yet Concerning Errors

By Huey Yee Ong

May 25, 2024


Google’s recent enhancement to its search engine, designed to improve user interaction by providing quick, AI-generated summaries for queries, has instead garnered attention for its inaccurate and often humorous outputs.

The AI suggested bizarre activities like “running with scissors” as beneficial, mistakenly citing a comedy blog, Little Old Lady Comedy, as a factual source. Such glaring errors have not only become a subject of ridicule on social media but have also raised serious questions about the AI’s ability to discern reliable information.

Public Red Teaming of Google’s AI

The nature of these AI mistakes has led users to perform an unintended form of red teaming—testing security measures by simulating attacks. In cybersecurity, red teams help identify vulnerabilities in systems.

Similarly, everyday users are highlighting flaws in Google’s AI by sharing its nonsensical responses online. This has drawn comparisons to how companies use red teams to enhance product security before public release, a practice that seems ironically fitting given the high-profile nature of Google’s missteps.

How Serious Are Google AI’s Errors?

These AI-generated errors are not isolated incidents. They range from suggesting that glue be added to pizza to help the cheese stick, a bizarre tip traced back to an old Reddit comment, to hazardous medical advice on treating rattlesnake bites, where the AI incorrectly proposed applying a tourniquet and sucking out the venom, contrary to established medical guidelines. In another notable case, the AI misidentified a toxic mushroom as an edible variety, an error with potentially dangerous consequences.

In response, Google has emphasized that these instances stem from uncommon queries that do not represent the typical user experience. The company told TechCrunch that the feature underwent extensive testing before launch and that it would use these occurrences to refine its systems further.

Despite Google’s reassurances, the continued emergence of such errors from a tech giant known for its sophisticated AI capabilities is both perplexing and troubling. The situation underscores the challenge of training AI models on diverse internet data, where the veracity of information varies widely.

The AI’s failure to process this data accurately has prompted public scrutiny of the value of multi-million-dollar content licensing deals aimed at improving AI capabilities, such as Google’s contract with Reddit and the similar agreements rumored to be in the works between other tech firms and content platforms.


Featured Image courtesy of Christoph Dernbach/dpa/picture alliance via Getty Images

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
