Microsoft apologizes for its rogue Twitter chatbot's 'wildly inappropriate and reprehensible words and images'

Microsoft apologized for racist and "reprehensible" tweets made by its chatbot and promised to keep the bot offline until the company is better prepared to counter malicious efforts to corrupt the bot's artificial intelligence.

In a blog entry on Friday, Microsoft Research head Peter Lee expressed regret for the conduct of its AI chatbot, named Tay, and explained what went wrong.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee writes.

Earlier this week, Microsoft launched Tay, a bot ostensibly designed to talk to users on Twitter like a real millennial teenager and learn from their responses.

But it didn't take long for things to go awry, and Microsoft was forced to delete her racist tweets and suspend the experiment.

"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Lee writes.

A coordinated effort by trolls on Twitter quickly taught Tay a slew of racist and xenophobic slurs. Within 24 hours of going online, Tay was professing her admiration for Hitler, proclaiming how much she hated Jews and Mexicans, and using the n-word quite a bit.

In the blog entry, Lee explains that Microsoft's Tay team was trying to replicate, for an American audience, the success of its Xiaoice chatbot, a smash hit in China with over 40 million users. Because the team had never had this kind of problem with Xiaoice, Lee says, it didn't anticipate the attack on Tay.

And make no mistake, Lee says, this was an attack.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee writes.

Ultimately, Lee says, this is part of the process of improving AI, and Microsoft is working to make sure Tay can't be abused the same way again.

"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process," Lee writes.

Still, the episode raises a larger question about the future of artificial intelligence: if Tay is supposed to learn from us, what does it say that she was so easily and quickly "tricked" into racism?