Microsoft takes down Tay, AI chatbot ‘with zero chill’, after racist remarks

The more you chat with Tay the smarter she gets, so the experience can be more personalized for you

By: AP | San Francisco | Updated: March 25, 2016 7:25 pm
Microsoft Tay AI chatbot ends up in a racist blur after reflecting on things it was told online (Source: TayTweets/Twitter)

OMG! Did you hear about the artificial intelligence program that Microsoft designed to chat like a teenage girl? It was totally yanked offline in less than a day after it began spouting racist, sexist and otherwise offensive remarks.

Microsoft said it was all the fault of some really mean people, who launched a “coordinated effort” to make the chatbot known as Tay “respond in inappropriate ways.” To which one artificial intelligence expert responded: Duh!

Well, he didn’t really say that. But computer scientist Kris Hammond did say, “I can’t believe they didn’t see this coming.”

Microsoft said its researchers created Tay as an experiment to learn more about computers and human conversation. On its website, the company said the program was targeted at an audience of 18- to 24-year-olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation.”

In other words, the program used a lot of slang and tried to provide humorous responses when people sent it messages and photos. The chatbot went live on Wednesday, and Microsoft invited the public to chat with Tay on Twitter and some other messaging services popular with teens and young adults.

“The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” the company said.

But some users found Tay’s responses odd, and others found it wasn’t hard to nudge Tay into making offensive comments, apparently prompted by repeated questions or statements that contained offensive words. Soon, Tay was making sympathetic references to Hitler – and creating a furor on social media.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” Microsoft said in a statement.

While the company didn’t elaborate, Hammond says it appears Microsoft made no effort to prepare Tay with appropriate responses to certain words or topics. Tay seems to be a version of “call and response” technology, added Hammond, who studies artificial intelligence at Northwestern University and also serves as chief scientist for Narrative Science, a company that develops computer programs that turn data into narrative reports.

“Everyone keeps saying that Tay learned this or that it became racist,” Hammond said. “It didn’t.” The program most likely reflected things it was told, probably more than once, by people who decided to see what would happen, he said.

The problem is that Microsoft turned Tay loose online, where many people consider it entertaining to stir things up – or worse. The company should have realized that people would try a variety of conversational gambits with Tay, said Caroline Sinders, an expert on “conversational analytics” who works on chat robots for another tech company. (She asked that the company not be identified because she wasn’t speaking in an official capacity.) She called Tay “an example of bad design.”

Instead of building in some guidelines for how the program would deal with controversial topics, Sinders added, it appears Tay was mostly left to learn from whatever it was told.

“This is a really good example of machine learning,” said Sinders. “It’s learning from input. That means it needs constant maintenance.”
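To illustrate the mechanism Hammond and Sinders describe, here is a minimal, hypothetical sketch of a “call and response” bot in Python. It is not Microsoft’s code; the class, the blocklist and the example prompts are invented for illustration. The bot simply replays the reply it has most often been given for a prompt, which is why repeated abusive input comes straight back out unless a designer adds the kind of guideline Sinders mentions.

```python
# Hypothetical sketch of "call and response" learning -- not Microsoft's actual code.
# The bot memorizes replies users pair with prompts and echoes the most common one,
# so unmoderated input gets repeated verbatim unless a topic filter steps in.
from collections import defaultdict, Counter

BLOCKED_TERMS = {"hitler"}  # placeholder list of topics a designer might filter

class CallAndResponseBot:
    def __init__(self):
        # prompt -> counts of the replies users have paired with that prompt
        self.memory = defaultdict(Counter)

    def learn(self, prompt, reply):
        """Store a user-supplied reply; repeated abuse makes it the top answer."""
        self.memory[prompt.lower()].update([reply])

    def respond(self, prompt):
        replies = self.memory.get(prompt.lower())
        if not replies:
            return "idk, tell me more"
        best, _ = replies.most_common(1)[0]
        # The kind of guideline Sinders describes: refuse to repeat flagged topics.
        if any(term in best.lower() for term in BLOCKED_TERMS):
            return "let's talk about something else"
        return best

bot = CallAndResponseBot()
bot.learn("who do you like?", "an offensive answer about hitler")
bot.learn("who do you like?", "an offensive answer about hitler")
print(bot.respond("who do you like?"))  # filtered: "let's talk about something else"
```

The point of the toy example is the maintenance Sinders describes: without the blocklist check, whatever a coordinated group repeats most often becomes the bot’s answer.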

Sinders said she hopes Microsoft will release the program again, but only after “doing some work” on it first.

Microsoft said it’s “making adjustments” to Tay, but there was no word on when Tay might be back. Most of the messages on its Twitter account were deleted by Thursday afternoon.

“c u soon humans need sleep now so many conversations today thx,” said the latest remaining post.


  1. Padmanabhan S, Mar 25, 2016 at 12:42 pm
     “Wild imaginations are distant realities,” Sri Aurobindo had said. The news about Tay only reminded me of “EPICAC,” the short story about a super computer written by Kurt Vonnegut Jr. in 1950. The story is part of Welcome to the Monkey House, a collection of short stories by the same author. Please do read the collection and you will marvel at the author’s wild imaginations coming true.
  2. NArayan, Mar 25, 2016 at 12:42 pm
     You create a program to learn from people and act like people, then you blame the people because the program behaves like the people, which it was supposed to do. Some serious rethinking is needed...
  3. Stefania, Mar 25, 2016 at 11:24 am
     The Emotion Processing Unit (EPU) from Emoshape demonstrated on South Korean TV last week how the EPU can prevent an AI from doing wrong. The AI was asked how she feels about Hitler and Obama, and the results were compared. Instantly the AI felt anger and disgust toward Hitler even though she knew nothing about him before.