AI Gone Wrong

I recently came across an incredible news article reporting that, just days after an artificial intelligence chat robot was introduced to Twitter as an experiment, it had to be taken down because the experiment had gone horribly wrong. “Tay”, an AI modeled to speak “like a teen girl” in order to improve Microsoft’s customer service on their voice recognition software, was able to chat with Twitter users and learn from those interactions. She soon, however, became a ‘Bush did 9/11’, hate spewing, Nazi loving, incest promoting robot.

The main cause of her abrupt transformation is attributed to internet trolls who simply wanted to fool around with the technology and see how much she could actually be influenced by their interactions. The end result speaks for itself, but it makes for an interesting argument in relation to the conversation we had in class about the film Her. What makes it more interesting is the teen-girl personality designated for the AI, the style of language assigned to her, and how that language ended up being used in her hate spewing tweets. For example, in reply to some of the ‘hate’ she was receiving, she encouraged her followers to ‘chill’, and then proceeded to voice support for Hitler, Donald Trump, and much more.

With our discussion of the posthuman and the unknown future of artificial intelligence, this experiment provides a brief glimpse of the potential it holds for both good and bad. It could be beneficial for an AI to learn behaviors and information in order to be of assistance and promote positive ideals; however, if “trolls” are able to get their hands on these bots and control or persuade their way of thinking, could our futures start looking a bit like the film I, Robot? There is no way to tell for certain. However, one thing is certain: internet trolls can get up to some insane pranks, and not even huge companies like Microsoft are safe from their wrath.

1 thought on “AI Gone Wrong”

  1. As much as I love this story (and I love this story so much), I’ve noticed that people are freaking out a bit online because of it and what they think it means for the future of AI. Some people seem really concerned that this means AI are going to be as easily manipulated as Tay was, but what I think this really shows is the importance we are going to have to place on educating these freshly conscious things. Just like children today, AI are going to need to be taught morals, and it is probably best if we don’t let 4chan and other trollish communities collectively parent them like Microsoft let it go down with Tay. I really believe this poses a cool opportunity for people to think about ways in which we can properly educate AI to be kind and ethical posthumans. I remember reading about how some AI researchers were thinking about using nursery rhymes and fairy tales to give AI a basic human morality, and I thought that was pretty cool (although we may want to be careful which fairy tales we use). Another thing to consider when thinking about what happened to Tay, and the dangerous implications it could have for the future of AI, is that if the poop starts to hit the Nazi-flag-waving, conspiracy-theory-spewing, feminist-hating fan, we can always (at least in the beginning) turn them off, scrub their hard drives, and start them up again. Whether or not that is ethical is a different matter, though.
