Thursday, March 31, 2016

Tay it ain't tso!

Microsoft's experiment with an online chatbot named 'Tay' was quickly ended when the bot began posting racist and even fascist messages.

http://www.nytimes.com/aponline/2016/03/24/us/ap-us-microsoft-twitter-chatbot-deleted.html?ref=news
The initial reaction in the Twitterverse was that the bot had been 'trained' to become 'racist', but as the New York Times article reveals, the system was not filtering the responses it produced, so people deliberately tried to get it to say outrageous things. It worked. Oops!

After brief experiences with Chatroulette, I have to agree that this was to be expected if the bot was programmed to respond with no filters. Other systems, such as Siri or Amazon Echo, simply do not respond to 'inappropriate content'.
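
Just to make the idea concrete, here is a rough, purely hypothetical sketch of the kind of output filter Tay apparently lacked: a simple blocklist check before a reply goes out. The term list, refusal message, and function name are all made up for illustration; this is not how Microsoft, Apple, or Amazon actually do it, and real moderation systems are far more sophisticated.

# Toy illustration only -- hypothetical blocklist filter, not any vendor's real approach.
BLOCKED_TERMS = {"example_slur", "example_insult"}  # placeholder entries
REFUSAL = "I'd rather not talk about that."

def filter_response(candidate: str) -> str:
    """Return the bot's candidate reply, or a canned refusal if it trips the blocklist."""
    lowered = candidate.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REFUSAL
    return candidate

print(filter_response("Hello there!"))                # passes through unchanged
print(filter_response("You are an example_insult"))   # replaced with the refusal

Even a crude check like this would have blunted the most obvious abuse, though of course people would immediately start hunting for words that aren't on the list.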

We were experimenting recently on a long tech support call, because it has been said that swearing while waiting on hold will result in the call being bumped up in the queue. I can verify that playing a video of swearing parrots into the phone did not cause any reduction in on-hold time with Sage Software. (I hung up after two hours.)