Microsoft says it faces 'difficult' challenges in AI design after chatbot Tay turned into a genocidal racist


Microsoft AI chatbot Tay. Twitter

Microsoft has admitted it faces some "difficult" challenges in AI design after its chatbot "Tay" had an offensive meltdown on social media.


Microsoft issued an apology in a blog post on Friday explaining it was "deeply sorry" after its artificially intelligent chatbot turned into a genocidal racist on Twitter.

In the blog post, Peter Lee, Microsoft's vice president of research, wrote: "Looking ahead, we face some difficult - and yet exciting - research challenges in AI design.


"AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.

"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."


Tay, an AI bot aimed at 18-to-24-year-olds, was deactivated within 24 hours of going live after she posted a number of highly offensive tweets. Microsoft began by simply deleting Tay's inappropriate tweets before turning her off completely.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Lee in the blog post. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Microsoft's aim with the chatbot was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers from websites like 4chan, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Lee added: "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."


AI expert Azeem Azhar told Business Insider on Thursday that Microsoft could have taken a number of precautionary steps that would have prevented Tay from behaving the way she did.

"It wouldn't have been too hard to create a blacklist of terms; or narrow the scope of replies," he said. "They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses."
