
Microsoft apologizes for its rogue Twitter chatbot's 'wildly inappropriate and reprehensible words and images'

Microsoft apologized for the bot's behavior and promised to keep it offline until the company is better prepared to counter malicious efforts to corrupt the bot's artificial intelligence.

In a blog post on Friday, Microsoft Research head Peter Lee expressed regret for the conduct of the company's AI chatbot, named Tay, and explained what went wrong.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee writes.

Earlier this week, Microsoft launched Tay - a bot ostensibly designed to talk to users on Twitter like a real millennial teenager and learn from the responses.

But it didn't take long for things to go awry, with Microsoft forced to delete her racist tweets and suspend the experiment.

"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Lee writes.

An organized effort by trolls on Twitter quickly taught Tay a slew of racial and xenophobic slurs. Within 24 hours of going online, Tay was professing her admiration for Hitler, proclaiming how much she hated Jews and Mexicans, and using the n-word quite a bit.

In the blog entry, Lee explains that Microsoft's Tay team was trying to replicate, for an American audience, the success of its Xiaoice chatbot, which is popular in China. Given that they had never had this kind of problem with Xiaoice, Lee says, they didn't anticipate this attack on Tay.

And make no mistake, Lee says, this was an attack.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee writes.

Ultimately, Lee says, this is a part of the process of improving AI, and Microsoft is working on making sure Tay can't be abused the same way again.

"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process," Lee writes.

Still, it raises a lot of questions about the future of artificial intelligence: If Tay is supposed to learn from us, what does it say that she was so easily and quickly corrupted into racism?
