
As part of an experiment in conversational understanding for artificial intelligence, Microsoft recently introduced an AI chatbot called Tay. The chatbot was hooked up to Twitter, where people could tweet at it to get a response. It took less than 24 hours for users to turn the chatbot into a hate-spewing racist, as it picked up all sorts of toxic ideas from the people interacting with it. Microsoft has now apologized for the entire episode.

The company explains in a blog post that a small group of Twitter users exploited a flaw in Tay to turn it into a bigot. Microsoft doesn’t go into detail about what this flaw was.

It’s believed that many of these users abused Tay’s “repeat after me” feature, which let them get Tay to repeat whatever they tweeted at it. Naturally, trolls tweeted sexist, racist, and abusive things at Tay, which it repeated word for word.

Microsoft initially tried to do damage control by cleaning up Tay’s timeline, but the situation quickly got out of hand. As the tweets went from bad to worse, the company decided to pull the plug on Tay altogether.

“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” writes Peter Lee, the corporate vice president of Microsoft Research.

Source: blogs.microsoft
