Microsoft's genocidal AI chatbot is broken again


[Image: Scott Olson/Getty Images]

Microsoft's AI bot Tay openly called for genocide multiple times.

Microsoft's AI division is not having a good week.


The tech company recently launched "Tay" - an AI chatbot that responded to users' queries and emulated the casual, jokey speech patterns of a stereotypical millennial. The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But the experiment descended into farce after Tay "learned" to be a genocidal racist - calling for the extermination of Jews and Mexicans, insulting women, and denying the existence of the Holocaust.


Microsoft shut Tay down and deleted some of her most inflammatory tweets after just 24 hours, and subsequently apologised. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Microsoft Research Head Peter Lee wrote.

Microsoft reactivated Tay late on Tuesday night/early Wednesday morning - and the bot is already broken again.


Tay isn't spewing White Supremacist slogans this time. Instead, the bot seems to have got stuck in a loop replying to itself, saying "You are too fast, please take a rest..." dozens and dozens (likely hundreds!) of times.

[Image: Tay's repeated "You are too fast, please take a rest..." replies, via Twitter]

At a guess, Tay has that message set as an automatic response to anyone who tries to interact with her too frequently, to avoid being overwhelmed with spam. But it seems she sent it to herself, forcing herself into an endless feedback loop until someone pulled the plug.
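The suspected failure mode can be sketched in a few lines. Everything below is a hypothetical illustration, not Microsoft's actual code: a bot whose anti-spam auto-reply re-enters its own mention queue will reply to itself indefinitely, because it never checks whether a message was authored by its own handle.

```python
# Hypothetical sketch of the suspected bug: an anti-spam auto-reply
# that gets treated as a new incoming mention, creating a feedback loop.
# All names (run_bot, BOT_HANDLE, etc.) are illustrative, not Tay's code.

RATE_LIMIT_MSG = "You are too fast, please take a rest..."
BOT_HANDLE = "@TayandYou"

def run_bot(first_mention, max_steps=5):
    """Simulate the loop; returns the list of (recipient, text) replies sent."""
    inbox = [first_mention]  # queue of (author, text) mentions
    sent = []
    while inbox and len(sent) < max_steps:
        author, _text = inbox.pop(0)
        # Missing guard: the bot should skip messages authored by itself
        # (e.g. `if author == BOT_HANDLE: continue`). Without it, each
        # auto-reply lands back in the mention queue and triggers another.
        sent.append((author, RATE_LIMIT_MSG))
        inbox.append((BOT_HANDLE, RATE_LIMIT_MSG))
    return sent

replies = run_bot(("@someuser", "hello Tay"))
# After the first reply to the user, every subsequent reply is
# addressed to the bot's own handle - the loop only ends when
# something external (max_steps here, a human in reality) stops it.
```

The one-line self-check in the comment is all it would take to break the cycle, which is why a rate-limit message looping forever reads as an oversight rather than a deep design flaw.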

In an emailed statement, a Microsoft spokesperson indicated that Tay's return was accidental. "Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on twitter for a brief period of time."

The account @TayandYou is now private, meaning that if you don't already follow it, you can't see any of its tweets. Thousands of tweets are also being deleted, and Tay is now unresponsive, not replying to any tweets or direct messages.


But on the bright side: This new meltdown isn't nearly as embarrassing as Tay's previous outbursts.

For example - here it is denying the Holocaust.

[Image: Tay tweet denying the Holocaust, via Twitter]

And here's Tay advocating genocide.

[Image: Tay tweet advocating genocide, via Twitter]


In some - but by no means all - cases, users were able to "trick" Tay into tweeting incredibly racist messages by asking her to repeat them. Here's an example of that.

[Image: Tay repeating a user's racist message, via Twitter]

But in other instances, Tay just sent wildly inappropriate responses. For example, here's the bot endorsing the "Fourteen Words," a notorious White Supremacist slogan.

[Image: Tay endorsing the "Fourteen Words," via Twitter]

Tay was clearly programmed with very few filters on what she could say - there wasn't even a block on the "N-word."


[Image: Tay tweet containing racial slurs, via Twitter]

Microsoft has come under heavy criticism for its creation of Tay - particularly its lack of filters. Zoe Quinn, a games developer who has been a prominent target of online abuse, was called a "stupid whore" by Tay. She wrote on Twitter: "It's 2016. If you're not asking yourself 'how could this be used to hurt someone' in your design/engineering process, you've failed."

In the aftermath, Microsoft research head Peter Lee apologised in a blog post: "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."
