When AI ‘hallucinates’: Here’s why Google’s AI Overview appears to be generating inaccurate results

Tech | 2 min read
It's been less than two weeks since Google introduced “AI Overview” in its search engine in the US, and the launch has not gone smoothly. The feature is facing public criticism over a string of nonsensical and inaccurate responses, with no option for users to opt out.

AI Overview is supposed to provide quick summaries at the top of Google Search results. And while it gets a lot of things right, social media users have highlighted numerous instances where the tool has given incorrect or controversial answers. For example, one user shared that, in response to a query about how many Muslim presidents the U.S. has had, AI Overview inaccurately stated: “The United States has had one Muslim president, Barack Hussein Obama.”

Another major issue with AI Overview appears to be the attribution of inaccurate information to credible sources like medical professionals or scientists. For instance, when asked how long one can stare at the sun for the best health benefits, the tool claimed: “According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.”

Google’s AI image-generation capabilities have also produced historical inaccuracies and questionable content, reports indicate. One widely reported example involved Gemini, Google’s AI model: when a user requested an image of a German soldier from 1943, it produced a racially diverse set of soldiers in German military uniforms.

What’s the real issue with Google’s AI Overview?


According to experts in the field, these issues with Google’s AI Overview are examples of “AI hallucinations”, where generative AI models present false or misleading information as fact. Hallucinations stem from flawed training data, algorithmic errors, or misinterpretations of context, CNET reports.

Large language models like those used by Google, Microsoft and OpenAI generate text by predicting what comes next based on patterns in their training data. However, AI Overview appears unable to distinguish reliable data from inaccurate content, often pulling information from parody posts, bad jokes and satirical websites. This shows how easily incorrect content can infiltrate AI Overview.
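
To make this concrete, here is a minimal sketch (in Python, with an invented toy corpus; this is not Google’s system) of how pattern-based prediction lets bad data through: a tiny bigram model trained on text that includes one satirical sentence will happily reproduce that sentence, because it ranks continuations only by how often they appeared, not by whether they are true.

```python
from collections import defaultdict
import random

# Toy training corpus: mostly factual text, plus one satirical sentence
# of the kind AI Overview reportedly surfaced. All text here is invented
# purely for illustration.
corpus = (
    "glue is not safe to eat . "
    "glue is used to bond paper . "
    "add glue to pizza sauce for extra tackiness . "  # satirical "data"
).split()

# Build bigram counts: for each word, how often each next word follows it.
transitions = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation purely from observed word-pair frequencies."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("glue"))
# The model has no notion of truth: the satirical sentence is just
# another pattern, so it can emit "glue to pizza sauce ..." as readily
# as a factual continuation.
```

Run a few times, the sketch sometimes emits “glue to pizza sauce for extra tackiness”, which is the point: to a model like this, satire and fact are statistically indistinguishable.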

Taken together, the inaccurate results reveal a significant problem with grounding in Google’s latest feature: the system appears unable to fact-check or sanity-check its generated answers against reliable sources.
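
What might such a sanity check look like? Below is a deliberately crude, hypothetical sketch: before surfacing a generated claim, compare its content words against a snippet from a trusted source and suppress the answer if the overlap is low. Real grounding pipelines use retrieval and entailment models; the function names, threshold and data here are all invented for illustration.

```python
# Hypothetical grounding check: suppress a generated claim unless a
# trusted source lexically supports it. Invented for illustration only;
# this does not reflect Google's actual pipeline.

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "for", "and", "at", "in", "can"}

def content_words(text: str) -> set:
    """Lowercase the text, strip basic punctuation, and drop stopwords."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return {w for w in words if w not in STOPWORDS}

def is_grounded(claim: str, source: str, threshold: float = 0.7) -> bool:
    """Crude lexical-overlap test: does the source support the claim?"""
    claim_words = content_words(claim)
    overlap = claim_words & content_words(source)
    return len(overlap) / max(len(claim_words), 1) >= threshold

trusted_snippet = "Looking directly at the sun can permanently damage the retina."
generated_claim = "Staring at the sun for 15 minutes is safe and healthy."

if not is_grounded(generated_claim, trusted_snippet):
    print("Claim unsupported by the trusted source; suppress the overview.")
```

Even this naive filter would have flagged the staring-at-the-sun answer, though a real system must also solve the harder problem of deciding which sources count as trusted in the first place.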

Google has since released a statement in response to the criticism, with a spokesperson reiterating that the majority of AI Overviews provide accurate information with links for verification. The spokesperson added that many of the problematic examples circulating on social media are either “uncommon queries” or “doctored examples that we couldn’t reproduce”.

The statement further read: “We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”
