
Google is scaling back its AI search plans after the summary feature told people to eat glue

Shubhangi Goel   

  • Google is scaling back AI-generated answers in search results after users noticed errors.
  • The AI Overviews feature was launched two weeks ago and has faced backlash for false and absurd responses.

Google is pulling back the use of AI-generated answers in search results after the feature made some infamous errors, including telling users to put glue in their pizza sauce.

Google launched AI Overviews two weeks ago, putting AI-generated summaries of search results at the top of the page for US users. Over the past few days, users, including an SEO expert, have noticed fewer of the overviews and suspected the tech giant is taking them down a notch after receiving criticism. It's not possible to turn off the AI feature while using the search engine.

Google's head of search, Liz Reid, confirmed in a blog post on Thursday that the company was addressing some of these issues.

The changes have come after examples of AI Overviews going haywire — and faked screenshots of the feature — flooded the internet. These included search responses claiming that Barack Obama was a Muslim president, that Africa had no countries beginning with the letter K, and that people should eat "at least one small rock per day."

Google's new guardrails include detecting "nonsensical queries" that shouldn't show AI results, limiting satire or humor content, and introducing restrictions for prompts in which AI results wouldn't be helpful because there isn't enough data about that topic.

Google's own ads show the erroneous summaries aren't limited to a few viral queries. In a demo video released two weeks ago, the Overview feature gave the wrong advice on how to fix a film camera.

Reid also said in her blog post that Google had limited content from forums or social media, which could have misleading advice.

"Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza," Reid wrote in the post.

Reid wrote that the company already had systems in place to not show AI-generated news or health-related results. She said harmful results that encouraged people to smoke while pregnant or leave their dogs in cars were "faked screenshots."

The list of changes is the latest example of the tech giant launching an AI product and circling back with restrictions after things get messy.

Earlier this year, Google AI's image-generating feature came under fire for refusing to produce pictures of white people. It was criticized for being too "woke" and creating images with historical inaccuracies, such as images of Asian Nazis and Black founding fathers. In a blog post a few weeks later, Google leadership apologized and paused the feature.
