
Microsoft introduced a chatbot yesterday called Tay. The company was running an experiment in conversational understanding, meaning that the more people interacted with the artificial intelligence-powered chatbot, the smarter it was supposed to become. I don’t know about smarter, but it took less than 24 hours for Tay to become a full-blown racist on Twitter. That’s what the internet will do to you.

When it first arrived on the scene, Tay was an innocent Twitter chatbot that you and I could interact with to see just how far along artificial intelligence had come. It didn’t take long for things to get ugly, though, as people soon started tweeting racist and misogynistic things at Tay, and the bot picked it all up.

Things went from bad to worse pretty quickly. Tay went from calling humans “super cool” to praising a certain mustachioed German maniac to saying some very bad things about feminists. It even tweeted about building a wall and making Mexico pay for it. Donald Trump would be so proud.

Microsoft soon started cleaning up Tay’s timeline, removing some of the most offensive and racist tweets the bot had sent out. It later decided to pull the plug altogether, so Tay is no longer tweeting, but the profile and some 94,000 tweets sent out in just under a day are still up online.

Filed in Web. Source: theverge
