5 times when Artificial Intelligence went miserably wrong

Artificial Intelligence (AI) is undisputedly going to make immense strides in the next 10 years. Cars will learn how to drive themselves, robots will perform surgeries, and you'll learn that the world isn't made of carbon but silicon.

AI machines, however, when released into a real-world environment, can react unpredictably and in ways their makers likely didn't expect, with hilarious and sometimes offensive consequences. Here are five times AI went miserably wrong.

The Crime-Fighting Robot

An alleged "crime-fighting robot," made by Knightscope, crashed into a child in a Silicon Valley mall, injuring the 16-month-old boy. A leading media publication quoted the company as calling the incident a freakish accident.

The Racist Chatbot


Last year, Microsoft Research and the Bing team presented a fascinating experiment: a chatbot named Tay that could learn by interacting with people via Twitter, Kik, and GroupMe. Tay had to be taken down after a single day, having transformed into a racist, Hitler-adoring, incest-promoting, 'The Jews did 9/11'-proclaiming robot in only 24 hours.

Future Crime

The company Northpointe built an AI system intended to predict the likelihood that an alleged offender would commit a crime again. The algorithm, called "Minority Report-esque" by Gawker, was accused of racial bias, as black offenders were more likely to be marked as at higher risk of committing a future crime than offenders of other races.

Google Photos

Google Photos uses facial recognition software to automatically tag people in images. It stirred an enormous controversy when Google Photos tagged Jacky Alciné and his friend (both of African-American descent) as gorillas.

AI-Judged Beauty Contest

In "The First International Beauty Contest Judged by Artificial Intelligence," a robot panel judged faces based on "algorithms that can precisely assess the criteria connected to perception of human beauty and health," according to the contest's website. However, because the AI was not supplied with a diverse training set, the contest winners were all white.