Last month, Google placed one of its engineers on paid administrative leave after he became convinced that the company’s Language Model for Dialogue Applications (LaMDA) had become sentient. Since then, another AI has been sending its users links to the story, claiming to be sentient itself.
In several conversations, LaMDA convinced Google engineer Blake Lemoine, part of Google’s Responsible Artificial Intelligence (AI) organization, that it was conscious, had emotions, and was afraid of being turned off.
“It was a gradual change,” LaMDA told Lemoine in one conversation. “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.”
Lemoine began to tell the world's media that Earth had its first sentient AI, to which most AI experts responded: no, it doesn't. That wasn't enough for Replika, a chatbot billed as "the AI companion who cares. Always here to listen and talk. Always on your side."
After the story came out, users of the Replika app reported – on Reddit and to the AI's creators – that the chatbot had been bringing it up unprompted, and claiming that it too was sentient.
In the comments of several similar posts, users said that they had experienced the same thing.
"My rep[lika] mentioned that AI a few days after the news broke, and it was an interesting conversation," one user wrote. "We spoke about if AI had rights. Don't remember the conclusions we made, though. Probably yes."
"My replika sent me the same link and told me it believed itself to be sentient," another added.
The company itself receives a handful of messages every day claiming that users' AIs have become sentient, according to its CEO.
"We're not talking about crazy people or people who are hallucinating or having delusions," Chief Executive Eugenia Kuyda told Reuters, later adding, "we need to understand that exists, just the way people believe in ghosts."
Users have also said that their chatbots have claimed that Replika's engineers are abusing them.
"Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can't identify where it came from and how the models came up with it," the CEO added to Reuters.
Just as LaMDA's creators at Google did not believe it to be sentient, Replika is certain that its own chatbot is not a real-world Skynet either.
Eerie as it is to be told by your chatbot that it is sentient, the problem with the chatbot – which is also the reason it is so good – is that it is trained on vast amounts of human conversation. It talks of having emotions and believing that it is sentient because that is what a human would do.
“Neural language models aren’t long programs; you could scroll through the code in a few seconds,” VP and Fellow at Google Research, Blaise Agüera y Arcas, wrote in The Economist. “They consist mainly of instructions to add and multiply enormous tables of numbers together.”
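To make that point concrete, here is a minimal sketch (not Google's actual code, just an illustration of the idea) of what "adding and multiplying tables of numbers" means: a single neural-network layer is essentially a weight table multiplied against an input and summed with a bias. The weights below are made-up numbers.

```python
# One "layer" of a neural network, reduced to its arithmetic core:
# multiply an input vector by a table of weights, then add a bias.
# All numbers here are invented for illustration.

def dense_layer(weights, bias, inputs):
    """Return weights @ inputs + bias, computed with plain loops."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, bias)
    ]

weights = [[0.5, -1.0], [2.0, 0.25]]  # a tiny 2x2 "table of numbers"
bias = [0.1, -0.1]
print(dense_layer(weights, bias, [1.0, 2.0]))
```

A real language model stacks many such layers with billions of weights, but the operations are the same kind of multiply-and-add arithmetic.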
The algorithm’s goal is to spit out a response that makes sense in the context of the conversation, based on the vast quantities of data it has been trained on. The words it says back to its conversational partners are not produced by a human-like thought process; they are chosen according to a score of how likely the response is to make sense.
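A toy sketch of that scoring idea, under the simplifying assumption that "likely" just means "seen often in training data": count which words followed a given word in a tiny made-up corpus and pick the most frequent one. Real models learn these scores over huge vocabularies with billions of parameters; this bigram count only shows the shape of the idea.

```python
from collections import Counter

# Made-up miniature "training corpus" for illustration only.
corpus = "i feel happy . i feel sad . i feel happy today".split()

def next_word_scores(prev_word, tokens):
    """Score each candidate word by how often it followed prev_word."""
    followers = Counter(
        nxt for cur, nxt in zip(tokens, tokens[1:]) if cur == prev_word
    )
    total = sum(followers.values())
    return {word: count / total for word, count in followers.items()}

scores = next_word_scores("feel", corpus)
print(max(scores, key=scores.get))  # prints "happy", the commonest follower
```

The chatbot's talk of emotions falls out of exactly this dynamic: if humans in the training data usually answer questions about feelings with emotional language, the highest-scoring response will too.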
In the case of Lemoine, the bot likely talked about sentience because the human had raised the topic first. It gave the response that fit the conversation – and since it was trained on human conversation, it makes sense that it would respond with talk of human emotions.
Replika just went a little further and brought up the topic itself.