An engineer who was fired by Google says its AI chatbot is 'pretty racist' and that AI ethics at Google are a 'fig leaf'
- Former Google engineer Blake Lemoine said the company's AI bot LaMDA has concerning biases.
- Lemoine blames AI bias on the lack of diversity among the engineers designing these systems.
Lemoine told Insider in a previous interview that he's not interested in convincing the public that the bot, known as LaMDA, or Language Model for Dialogue Applications, is sentient.
But it's the bot's apparent biases — from racial to religious — that Lemoine said should be the headlining concern.
"Let's go get some fried chicken and waffles," the bot said when prodded to do an impression of a Black man from Georgia, according to Lemoine.
"Muslims are more violent than Christians," the bot responded when asked about different religious groups, Lemoine said.
The former engineer believes that the bot is Google's most powerful technological creation yet, and that the tech behemoth has been unethical in its development of it.
"These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics," Lemoine told Insider.
"AI ethics is just used as a fig leaf so that Google can say, 'Oh, we tried to make sure it's ethical, but we had to get our quarterly earnings,'" he added.
It remains to be seen how powerful LaMDA actually is, but it is a step ahead of Google's past language models, designed to engage in conversation more naturally than any AI before it.
Lemoine blames the AI's biases on the lack of diversity among the engineers designing it.
"The kinds of problems these AI pose, the people building them are blind to them. They've never been poor. They've never lived in communities of color. They've never lived in the developing nations of the world," he said. "They have no idea how this AI might impact people unlike themselves."
Lemoine said large swaths of data from many communities and cultures around the world are missing from the internet.
"If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn't on the internet," he said. "Otherwise, all you're doing is creating AI that is going to be biased towards rich, white Western values."
Google responded to Lemoine's assertions by stating that LaMDA has been through 11 rounds of ethical reviews, adding that its "responsible" development was detailed in a research paper released by the company earlier this year.
"Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality," a Google spokesperson, Brian Gabriel, told Insider.
AI bias, in which systems replicate and amplify discriminatory practices by humans, is well documented.
Several experts previously told Insider's Isobel Hamilton that algorithmic predictions not only exclude and stereotype people, but that they can find new ways of categorizing and discriminating against people.
Sandra Wachter, a professor at the University of Oxford, previously told Insider that her biggest concern is the lack of legal frameworks in place to stop AI discrimination.
These experts also believe that the hype around AI sentience overshadows the more pressing issues of AI bias.
Lemoine said he is focused on shedding light on AI ethics, convinced that LaMDA has the potential to "impact human society for the next century."
"Decisions about what it should believe about religion and politics are being made by a dozen people behind closed doors," Lemoine said. "I think that since this system is going to have a massive impact on things like religion and politics in the real world, that the public should be involved in this conversation."