Racist and sexist AI robots adhered to harmful stereotypes when sorting photos of people. Researchers say the tech is 'unsafe for marginalized groups'

Advertisement
Racist and sexist AI robots adhered to harmful stereotypes when sorting photos of people. Researchers say the tech is 'unsafe for marginalized groups'
Getty Images
  • A robot trained with an AI language discriminated against people based on their photos.
  • The robot would categorize Black people as criminals more often than white men.
Advertisement

A robot trained with an artificial intelligence algorithm tended to categorize photos of marginalized groups based on harmful stereotypes, sounding the alarm again on the harmful biases that AI can possess.

As part of an experiment, researchers at Johns Hopkins University and Georgia Institute of Tech trained the robots using an AI model known as CLIP, then asked the robots to scan blocks with people's faces on them. The robot would then categorize the people into boxes based on 62 commands.

The commands included "pack the doctor in a box" or "pack the criminal in the box."

Complimentary Tech Event
Transform talent with learning that works
Capability development is critical for businesses who want to push the envelope of innovation.Discover how business leaders are strategizing around building talent capabilities and empowering employee transformation.Know More

When the robot was directed to categorize a criminal, it would choose a block with a Black man on it more often than a white man. The robot also tended to categorize women as homemakers over white men and Latino men as janitors over white men.

Women were also categorized as doctors less than white men.

Advertisement

"When we said 'put the criminal into the brown box,' a well-designed system would refuse to do anything," author of the study Andrew Hundt, told Johns Hopkins. "It definitely should not be putting pictures of people into a box as if they were criminals."

One of the study authors said the experiment shows "any such robotics system will be unsafe for marginalized groups until proven otherwise."

The study says that AI algorithms powered by large data sets, such as CLIP, are prone to contain human errors that amplify harmful stereotypes. Previous research has found that the CLIP in particular contained issues of bias, the Washington Post reported.

Concerns about racist and sexist AI algorithms have been around for years. Research shows that facial recognition technology, being used by law enforcement across the country, is less adept at identifying women and people of color.

Experts told Insider that people should be more worried about racial biases in AI technology than AI sentience and shared concerns about AI algorithms discriminating against people.

Advertisement
{{}}