Don't worry about AI becoming sentient. Do worry about it finding new ways to discriminate against people.

  • A story about a Google engineer saying the company had created a sentient AI recently went viral.
  • Google's AI chatbot is not sentient, seven experts told Insider.

First the good news: sentient AI isn't anywhere near becoming a real thing. Now the bad news: there are plenty of other problems with AI.

A story about a supposedly sentient AI recently went viral. Google engineer Blake Lemoine revealed his belief that a company chatbot named LaMDA (Language Model for Dialogue Applications) had achieved sentience.

Seven AI experts who talked to Insider were unanimous in their dismissal of Lemoine's theory that LaMDA was a conscious being. They included a Google employee who has worked directly with the chatbot.

However, AI doesn't need to be clever to do serious damage, experts told Insider.

AI bias, in which systems replicate and amplify historical discriminatory practices, is well documented.

Facial recognition systems have been found to display racial and gender bias, and in 2018 Amazon shut down a recruitment AI tool it had developed because it was consistently discriminating against female applicants.
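
How does a hiring tool "learn" to discriminate? The minimal sketch below trains a simple classifier on synthetic hiring decisions that carry a built-in penalty against a gender-correlated feature. It is an illustration under invented assumptions, not Amazon's actual data, features, or model: the point is only that a model fitted to biased historical decisions reproduces the bias.

```python
# A minimal synthetic sketch: the feature names, numbers, and the
# "womens_college" proxy are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: years of experience, and a gender-correlated proxy flag.
# In this synthetic world, the proxy says nothing about ability.
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)  # 1 = attended a women's college

# Historical labels: past recruiters rewarded experience but also
# systematically penalized candidates with the proxy flag.
logits = 0.8 * (experience - 5) - 1.5 * womens_college
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical penalty: the proxy's
# learned coefficient comes out strongly negative (roughly -1.5 here).
print(dict(zip(["experience", "womens_college"], model.coef_[0].round(2))))
```

Nothing in this pipeline flags a problem. The model is simply optimizing its fit to past decisions, bias included.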

"When predictive algorithms or so-called 'AI' are so widely used, it can be difficult to recognise that these predictions are often based on little more than rapid regurgitation of crowdsourced opinions, stereotypes, or lies," says Dr Nakeema Stefflbauer, a specialist in AI ethics and CEO of women in tech network Frauenloop.

"Maybe it's fun to speculate on how 'sentient' the auto-generation of historically correlated word strings appears, but that's a disingenuous exercise when, right now, algorithmic predictions are excluding, stereotyping, and unfairly targeting individuals and communities based on data pulled from, say, Reddit," she tells Insider.

Professor Sandra Wachter of the University of Oxford detailed in a recent paper that not only does AI show bias against protected characteristics like race and gender, but it also finds new ways to categorize and discriminate against people.

For example, which browser you use to apply for a job could mean AI recruitment systems either favor or derank your application.
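
A hedged sketch of that mechanism, under the same caveat that the data, the correlation, and the scenario are assumptions for illustration rather than Wachter's examples: if browser choice happens to correlate with an outcome such as job tenure in the historical records, a model will split on it, sorting applicants into a "group" that no anti-discrimination law recognizes.

```python
# Synthetic illustration only: all features and correlations are assumed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 4000

skill = rng.normal(0, 1, n)
# Suppose applicants using a non-default browser happened, in past data,
# to stay longer in their jobs, for reasons unrelated to merit.
nondefault_browser = rng.integers(0, 2, n)
stayed_two_years = (skill + nondefault_browser + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([skill, nondefault_browser])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, stayed_two_years)

# The tree assigns real weight to browser choice, a category that is
# decisive for applicants yet invisible to discrimination law.
print(tree.feature_importances_)  # importances for [skill, browser]
```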

Wachter's concern is the lack of a legal framework to stop AI from finding new ways to discriminate.

"We know that AI picks up patterns of past injustice in hiring, lending or criminal justice and transports them into the future. But AI also creates new groups that are not protected under the law to make important decisions," she says.

"These issues need urgent answers. Let's address these first and worry about sentient AI if and when we are actually close to crossing that bridge," Wachter adds.

Laura Edelson, a computer science researcher at New York University, says AI systems also provide a get-out for the people who use them when the systems turn out to be discriminatory.

"A common use case for machine learning systems is to make decisions that humans don't want to make as a way of abdicating responsibility. 'It's not me, it's the system'," she tells Insider.

Stefflbauer believes the hype around sentient AI actively overshadows more pressing issues around AI bias.

"We are derailing the work of world-class AI ethics researchers who have to debunk these stories of algorithmic evolution and 'sentience' such that there's no time or media attention given to the growing harms that predictive systems are enabling."
