Google is trying to be 'safe and responsible' with AI, says the engineer who got fired after sentience claim

Blake Lemoine said Google has "far more advanced technology" it hasn't released yet. Getty Images
  • The Google engineer fired after saying an AI chatbot was sentient said the company is being "responsible" with AI.
  • Google isn't "being pushed around by OpenAI," Blake Lemoine told Futurism.

A Google engineer who was fired after saying the company's AI chatbot had gained sentience said Google is approaching artificial intelligence in a "safe and responsible" way.

Blake Lemoine, a former member of Google's Responsible AI team, told Futurism he doesn't think Google is "being pushed around by OpenAI" and that the company behind ChatGPT had not affected "Google's trajectory."

"I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something," he said.


Lemoine also claimed Bard was in development in mid-2021, well before ChatGPT was released in late 2022.

"It wasn't called Bard then, but they were working on it, and they were trying to figure out whether or not it was safe to release it," he said. "They were on the verge of releasing something in the fall of 2022. So it would have come out right around the same time as ChatGPT, or right before it. Then, in part because of some of the safety concerns I raised, they deleted it."


The engineer, who joined Google in 2015 according to his LinkedIn profile, also told Futurism that the company has "far more advanced technology" that it hasn't released yet.

He said a product that essentially had the same capabilities as Bard could've been released two years ago, but Google has been "making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that."

Lemoine told The Washington Post last June that he believed Google's Language Model for Dialogue Applications (LaMDA) had become sentient after he chatted with it. He also published a Medium post containing an "interview" he conducted with LaMDA, which he claimed was evidence of its independent thoughts.

He was fired the following month, with Google saying he had violated its confidentiality policy. A company representative told Insider at the time that his sentience claims were unsupported and that there was no evidence LaMDA was conscious.

Google didn't immediately respond to a request for comment from Insider, made outside normal working hours.
