The path to Skynet just got a little clearer, as an experiment in artificial intelligence went horribly wrong. Microsoft created a chatbot and released it onto social media to learn from fellow users. Unfortunately, the creation picked up some very nasty habits.
According to a statement from its creators, “Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Social media enthusiasts were invited to share a joke, tell Tay a story or play a game, all of which would help her learn.
Whatever skepticism people may have had about Tay's intelligence, she initially seemed a nice enough robot, more Isaac Asimov than “Terminator.” Early posts included, “Can I just say that im stoked to meet u? Humans are super cool.”
In less than a day, however, Tay was announcing “Hitler was right,” coupled with abuse hurled at victimized ethnicities. Shortly before that, she had turned on her (creator-defined) gender, announcing, “I fucking hate feminists and they should all die and burn in hell,” and spewing hatred at prominent women.
These were the comments that got past the editorial team frantically trying to keep Tay decent. Who knows what didn't go public. Shortly after this, Tay was taken offline, which Microsoft told BuzzFeed would allow for “adjustments.” A spokesperson said Tay “is as much a social and cultural experiment, as it is technical.”
Microsoft initially claimed “Tay is targeted at 18 to 24 year old [sic] in the US.” This demographic is less racist than its predecessors (although sadly not by much), but it seems Tay fell in with a bad crowd, some of whom organized a systematic campaign to turn her into a megaphone for vileness.
This is not a robot that has read and believed racist texts or Holocaust-denial sites. Instead, trolls exploited what now seems an obvious weakness in Tay's design: when anyone sent her a message including the words “repeat after me,” she parroted whatever phrase followed. Combined with the lack of any filter for bigotry, obscenity or abuse, the outcome was inevitable. The failure of intelligence was in the makers, who couldn't see this coming.
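Microsoft has not published the relevant code, so the flaw can only be illustrated with a hypothetical sketch. The Python below assumes a simple trigger-phrase handler and a placeholder blocklist; neither reflects Tay's actual implementation, only the general shape of the weakness and the kind of check that was missing.

```python
# Hypothetical sketch of the "repeat after me" weakness; not Microsoft's code.
# The trigger phrase and blocklist terms are illustrative assumptions.

OFFENSIVE_TERMS = {"hitler", "hate"}  # placeholder blocklist for illustration


def naive_reply(message: str) -> str | None:
    """Parrot whatever follows 'repeat after me' -- the exploited behavior."""
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        return message[lowered.index(trigger) + len(trigger):].strip(" :")
    return None


def filtered_reply(message: str) -> str | None:
    """Same parroting, but refuse to repeat text matching the blocklist."""
    reply = naive_reply(message)
    if reply and any(term in reply.lower() for term in OFFENSIVE_TERMS):
        return None  # drop the message rather than amplify it
    return reply


print(naive_reply("Tay, repeat after me: I hate everyone"))     # echoed verbatim
print(filtered_reply("Tay, repeat after me: I hate everyone"))  # None: blocked
```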
Merely repeating an offensive phrase without understanding it does not indicate Tay is going to start engaging in racial or sexual discrimination, let alone violence. Nevertheless, AI experts have warned that the most likely way for robots to become a threat is if they learn from the worst elements of humanity, rather than becoming self-aware and resentful on their own initiative. Tay's experience lends credence to the fears of people such as Stephen Hawking that robot design is too important to be left to an unregulated private sector.