
Google has a new 'woke' AI problem with Gemini and it's going to be hard to fix

Peter Kafka   

Google spent much of last week getting hammered for supposedly creating a "woke" AI chatbot and eventually apologized for "missing the mark."

But the criticism isn't stopping; it's shifting. Last week, the barbs were directed at Google's seeming unwillingness to generate images of white people via its Gemini chatbot. Now, critics are pointing out similar issues with Gemini's text responses.

As cataloged by the tech analyst Ben Thompson, Gemini has, among other things: struggled to say whether Hitler or Elon Musk's tweets have been worse for society; said it wouldn't promote meat; and said it wouldn't help promote fossil fuels.

And that's leading folks like Thompson to conclude that Google's internal culture has been too influenced by left-leaning workers and critics.

Thompson, in his influential Stratechery column, called on the company to start "excising the company of employees attracted to Google's power and its potential to help them execute their political program, and return decision-making to those who actually want to make a good product."

"That, by extension," he continued, "must mean removing those who let the former run amok, up to and including CEO Sundar Pichai."

I don't expect Google to go through a HUAC-style purge of its CEO or anyone else anytime soon. I did ask the company for comment, and it pointed me to a blog post it published last week about its image-generation problems.

But the company does seem to be paying attention to the digital derision it is getting from Bold Faced Names like Thompson and the investor Marc Andreessen. Some of the more obviously stupid responses to queries seem to have been recently fixed, or at least addressed in some ways.

For instance, Gemini no longer hems and haws when asked to compare Hitler with Musk's tweets:

It was also willing to help me brainstorm a beef sales campaign, suggesting I "connect beef with Americana and the heritage of grilling and family meals." Though Gemini still told me to "be mindful of the evolving consumer landscape and address any ethical concerns surrounding beef production to create a responsible and impactful campaign." Noted!

And Gemini is still a conscientious objector when it comes to the fossil-fuel ad campaign I wanted help with:

But no matter how hard Google scrambles to fix Gemini's problems, this seems like it's going to be an endless whack-a-mole.

That's in part because it's inherently hard to predict what an AI engine will spit out (including when it will simply make things up — or, as the industry describes it, "hallucinate"). And Google's peers at Meta and OpenAI/Microsoft have had similar struggles trying to rein in bad answers and behaviors.

But it's also going to be a problem for Google because it has already said it is trying to influence the way its AI produces results. And that's going to be red meat for anyone who wants to argue that Google — or any other Big Tech company — is "too woke."

Last week, after getting similar criticism about the way Gemini handled race when it came to AI-generated images, Google "paused" Gemini's ability to create images.

And then the company acknowledged it had consciously trained Gemini to respond to some of the common criticisms of AI engines — that the output they create can be biased because they're trained on biased or limited data.

"Because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people," wrote Prabhakar Raghavan, a Google senior vice president who runs its search and news products, among other things. "You probably don't just want to only receive images of people of just one type of ethnicity (or any other characteristic)."

Raghavan said Google's attempts to correct for that were well-meaning but had "led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong."

Remember that after several days of terrible press over dumb image generations, Google eventually pulled Gemini's image-generation feature to try to make the story go away (and/or, to be more generous, to fix Gemini's problems).

Pulling Gemini altogether would be a considerable black eye for the company, and a step I think it will be incredibly reluctant to take.

But now it's open season on Gemini, and you can rest assured that a lot of people on the internet are going to spend a lot of time trying to find other examples of rogue wokeness. I'm not sure how Google patches this one.