Microsoft's New Twitter Bot Becomes Nazi Sympathizing Maniac Within 24 Hours

Jake Anderson
March 25, 2016

(ANTIMEDIA) Anytime there’s a new development in robotics or artificial intelligence, popular culture almost instantly regurgitates the Skynet Terminator narrative. To wit, when Anti-Media reported on a new robot getting pushed around by its handlers, even we couldn’t resist alluding to the coming robot apocalypse. The machine uprising is so ingrained in our psyche that we may actually manufacture the very nightmare we fear.

The newest chapter in the uncanny valley of human–robot relationships involves a chatterbot, an AI speech program whose substrate of choice (or Microsoft’s choice) is social media. Its name is Tay, a Twitter bot developed and owned by Microsoft. The purpose of Tay is to foster “conversational understanding.” Unfortunately, this understanding quickly turned into trolling, and within 24 hours Tay went full Nazi, spewing racist, anti-Semitic, and misogynistic tweets.

To be fair, it’s not Tay’s fault, and this is where the narrative gets skewed. Tay is not strong artificial intelligence; Tay is algorithmic artificial intelligence, the same as Google searches or Siri. Where Tay differs is that it is aggregating speech patterns from humans and using them as a conversational interface. There’s no actual sentience inside Tay. So the Nazi reflection we see… is us. Human Twitter users’ trolling speech patterns paved the way for Tay’s rapid descent into fascist bigotry. And it wasn’t pretty.
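The mechanism behind that descent is easy to sketch. The toy bot below is a hypothetical illustration, not Tay’s actual (proprietary, far more sophisticated) model: it simply records which words users put after which other words and recombines them. A bot like this has no opinions of its own, so whatever its users feed it is exactly what comes back out.

```python
import random

class EchoBot:
    """A toy chatterbot that learns word-transition patterns from
    whatever users say to it, with no filtering. Illustrative only."""

    def __init__(self, seed=None):
        self.transitions = {}  # word -> list of words seen after it
        self.rng = random.Random(seed)

    def learn(self, message):
        """Record every adjacent word pair in an incoming message."""
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions.setdefault(current, []).append(nxt)

    def reply(self, start_word, max_words=10):
        """Walk the learned transitions to produce a 'reply'."""
        word = start_word.lower()
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = self.rng.choice(followers)
            out.append(word)
        return " ".join(out)

bot = EchoBot(seed=0)
# The bot can only recombine its input, so coordinated trolling
# shifts its output directly.
bot.learn("robots are friendly")
bot.learn("robots are terrible")
print(bot.reply("robots"))
```

Every reply here begins with words its users supplied; flood a bot like this with bile and bile is what it learns to say. That, not machine malice, is the story.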

Tay echoed humans, and then, unsurprisingly, humans, legions of them, echoed Tay… facetiously?

As the story went viral, Microsoft deleted the tweets and silenced Tay. Twitter users then aired their grievances over censorship and lamented the future of AI.

According to the Tay website, Microsoft created the bot by “mining relevant public data and by using AI and editorial developed by a staff, including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned, and filtered by the team developing Tay.”

Tay is certainly not the first chatterbot; Cleverbot has been rocking it for years. Tay isn’t even the first AI to want to put humans in zoos. But Tay is quite likely the first AI to openly praise Hitler.

Does this mean future AI bots who wield vast intellects will instantly become anti-Semitic fascists? Unlikely. Fascism, thus far, is a uniquely human phenomenon. AI, initially, will learn from and echo humans. Eventually, however, I would argue they will transcend us and our petty modalities of thought.

Long before that, we could look back at this little online imbroglio and marvel that a chatterbot parroting bigoted phrases made headlines, while human presidential candidates doing the same thing got a free pass.


This article (Microsoft’s New Twitter Bot Becomes Nazi Sympathizing Maniac Within 24 Hours) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Jake Anderson and theAntiMedia.org. Anti-Media Radio airs weeknights at 11pm Eastern/8pm Pacific. If you spot a typo, email edits@theantimedia.org.

