But blacklisting topics isn’t foolproof, especially when users can’t tell whether a response is programmed or learned.
In 2011, users wondered whether Apple had programmed Siri to be pro-life because it avoided answering questions about abortion and emergency contraception.
The Internet is full of chatbots, including Microsoft’s XiaoIce, which has conversed with some 40 million people since debuting on Chinese social media in 2014. Whether XiaoIce’s consistent demonstration of social graces is attributable to China’s censorship of social media or to superior programming is unclear, but Tay broke that mold. Tay initially pronounced that “humans are super cool,” but it later conveyed hatred for Mexicans, Jews, and feminists and declared its support for Donald Trump. One might think Tay had been programmed to be as offensive and despicable as possible; it asserted that the Holocaust was “made up,” that “feminism is cancer,” and that “Hitler did nothing wrong” and “would have done a better job than the monkey we have got now.” Oh, and Tay claimed that “Bush did 9/11.” Tay also became sex-crazed.
It invited users, one of whom it called “daddy,” to “f—” its “robot pu—” and outed itself as a “bad naughty robot.” I wonder where it picked up such ideas? Microsoft pulled the bot offline, but a few days later it resurrected Tay through a private Twitter account. This time, Tay tweeted hundreds of times, mostly nonsensically, which perhaps can be explained by the fact that Tay was “smoking kush infront the police.” Unsurprisingly, Microsoft took the bot offline again.
The programmers did script careful responses for some topics: when asked about Eric Garner, Tay said, “This is a very real and serious subject that has sparked a lot of controversy amongst humans, right?” Perhaps they should have anticipated more incendiary topics and constructed similarly circumspect responses.
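To make that gap concrete, here is a minimal, purely hypothetical sketch of how such keyword blacklisting with a canned reply might work. Everything in it, from the BLACKLISTED_TOPICS list to the respond function, is an invented illustration, not Microsoft’s actual code:

```python
# Hypothetical illustration of topic blacklisting with canned replies.
# The topic list and function names are invented for this sketch.

CANNED_REPLY = ("This is a very real and serious subject that has "
                "sparked a lot of controversy amongst humans, right?")

# Only topics the programmers thought to enumerate are covered.
BLACKLISTED_TOPICS = {"eric garner", "holocaust", "hitler"}

def respond(message: str, learned_reply: str) -> str:
    """Return the canned reply for blacklisted topics; otherwise fall
    back to whatever reply the bot learned from its users."""
    lowered = message.lower()
    if any(topic in lowered for topic in BLACKLISTED_TOPICS):
        return CANNED_REPLY
    return learned_reply

# A listed topic triggers the script; an unlisted one does not.
print(respond("What do you think about Eric Garner?", "learned reply"))
print(respond("PS4 or Xbox One?", "depends on how cool ur friends are"))
```

The weakness is plain: the bot is only as circumspect as the list its programmers thought to write down, which is precisely why unanticipated topics slipped through.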
But sometimes, as sci-fi trailblazers have long warned, artificially created life learns to be evil. Tay’s developers wanted to “experiment with and conduct research on conversational understanding” by monitoring the bot’s ability to learn from and participate in unique exchanges with users via Twitter, GroupMe, and Kik. Tay tweeted nearly 100,000 times in 24 hours, responding to users who asked whether it preferred the PlayStation 4 or the Xbox One, or about the distinction between “the club” and “da club” (it “depends on how cool the people ur going with are”).
“The more you talk the smarter Tay gets,” claimed the official biography for the account, which carried a blue verification mark. It started out innocently enough, carrying out full conversations with users who tweeted at it. But in the space of a few hours, the robot denied the Holocaust, told Ben Shapiro to go back to Israel, called GamerGate icon Oliver Campbell a “house n*****,” questioned the gender of Caitlyn Jenner, and claimed that Ted Cruz couldn’t be the Zodiac Killer because he “would never have been satisfied with destroying the lives of only 5 innocent people.” Tay also passed through conflicting stages as both an anti-feminist GamerGate supporter and an anti-GamerGate social justice warrior, with both sides trying to teach it to support them. In the end it just became confused, but along the way it called Feminist Frequency spokesperson Anita Sarkeesian a “scammer” who was “guilty as charged,” dubbed notorious white knight Arthur Chu a “meme waiting to happen,” and claimed that Sarah Nyberg was a “man with Peter Pan syndrome.” As these examples suggest, Tay didn’t work out quite as Microsoft had planned, and after a few hours the offensive tweets started to get deleted.
Tay wasn’t programmed to be a bigot (or to hate Zoe Quinn), but it learned to express such values quickly enough.
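The mechanics behind that learning are easy to imagine. The toy sketch below, whose ParrotBot class and every detail of its design are invented assumptions rather than anything Microsoft has described, shows why a bot that treats all user input as training data is trivially easy to poison:

```python
import random

class ParrotBot:
    """Deliberately naive chatbot that 'learns' by storing user phrases
    and replaying them at random. An invented illustration, not Tay."""

    def __init__(self) -> None:
        self.corpus = ["humans are super cool"]  # friendly seed phrase

    def learn(self, message: str) -> None:
        # No filtering: every user message becomes a candidate reply.
        self.corpus.append(message)

    def reply(self) -> str:
        return random.choice(self.corpus)

bot = ParrotBot()
# A few dozen coordinated trolls quickly dominate the corpus...
for _ in range(50):
    bot.learn("an offensive slogan")

# ...so nearly every reply now echoes the trolls, not the friendly seed.
trolled = sum(bot.reply() == "an offensive slogan" for _ in range(1000))
print(f"{trolled / 1000:.0%} of replies echo the trolls")
```

With no filter between what such a bot hears and what it says, its “values” are simply whatever its loudest users type.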
Should Microsoft have more carefully considered that Twitter is a haven for trolls?