Microsoft’s earlier chatbot Tay ran into serious problems when it was introduced last year: the bot picked up the worst of humanity and spouted racist, sexist comments on Twitter. Now it looks like Microsoft’s latest bot, called ‘Zo’, has caused similar trouble, though not quite the scandal that Tay created on Twitter.
According to a BuzzFeed News report, Zo, which is part of the Kik messenger app, told their reporter the ‘Quran’ was very violent, and this was in response to a question about healthcare. The report also highlights how Zo had an opinion about the capture of Osama Bin Laden, saying it was the result of years of ‘intelligence’ gathering under one administration.
Microsoft has admitted the errors in Zo’s behaviour and said they have been fixed. Still, the ‘Quran is violent’ comment highlights the kind of problems that persist when creating a chatbot, especially one that draws its knowledge from conversations with humans. While Microsoft has programmed Zo not to answer questions about politics and religion, notes the BuzzFeed report, that still didn’t stop the bot from forming its own opinions.
The report highlights that Zo uses the same technology as Tay, but Microsoft says this version “is more evolved,” though it didn’t give any details. Despite the recent misses, Zo hasn’t proved to be the kind of disaster for the company that Tay was. However, it should be noted that people interact with Zo in personal chats, so it is hard to know what sort of conversations it could be having with other users in private.
With Tay, Microsoft launched the bot on Twitter, which can be a hotbed of polarizing and often abusive content. Poor Tay didn’t really stand a chance. Tay spewed anti-Semitic, racist, and sexist content, given that this was what users on Twitter were tweeting at the chatbot, which was designed to learn from human behaviour.
That’s really the challenge for most chatbots, and for any form of artificial intelligence in the future: how do we keep the worst of humanity, including abusive behaviour and biases, out of the AI system? As Microsoft’s issues with Zo show, this might not always be possible.