

The Eliza Effect: How A Chatbot Convinced People It Was Real Way Back In The 1960s

Google's chatbot is convincing, but so was Eliza.

James Felton

Senior Staff Writer

James is a published author with four pop-history and science books to his name. He specializes in history, strange science, and anything out of the ordinary.

Eliza fooled people long before Google's chatbot. Image credit: Public domain

Last week, a senior software engineer at Google was placed on administrative leave after becoming convinced that the company’s Language Model for Dialogue Applications (LaMDA) had become sentient.

Google engineer Blake Lemoine, part of Google’s Responsible Artificial Intelligence (AI) organization, signed up to test LaMDA last fall. The job involved talking to the AI to test whether it used discriminatory language. However, as he talked to LaMDA – itself a system for building chatbots with natural language processing – he began to believe that the AI was self-aware and sentient.


In a series of chats – which Lemoine posted on his blog – he became convinced that LaMDA had emotions, a sense of self, and a real fear of death.

“It was a gradual change,” LaMDA told Lemoine in one conversation. “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.”

The story drew a lot of attention, from people who thought the chatbot had achieved sentience (spoiler alert: it hasn't) to those surprised that a software engineer could be fooled so easily by a chatbot, sophisticated though it is. But humans have always been surprisingly easy to fool in this manner. The phenomenon is known as the "Eliza Effect".

In 1964, Joseph Weizenbaum – a professor at MIT – created a chatbot designed to demonstrate the superficiality of conversation between humans and machines. ELIZA, as he named it, was pretty basic compared to today's chatbots and the Google model that fooled Lemoine. It could – mostly – identify keywords in a sentence and then ask questions back to the user based on that input. However, with the right prompts from the humans involved in the conversation, Weizenbaum found that this was enough to convince people that the bot was doing something a lot smarter than it was.


Weizenbaum got the program to act as a psychiatrist, specifically a Rogerian psychotherapist. This type of therapist reflects a patient's own statements back at them, a technique known as "reflective listening". By asking people to talk to the bot as if it were a therapist, Weizenbaum got around a key problem with creating convincing conversations between humans and AI: ELIZA knew absolutely nothing about the real world.

"ELIZA performs best when its human correspondent is initially instructed to 'talk' to it, via the typewriter of course, just as one would to a psychiatrist," Weizenbaum wrote in a paper on the topic. "This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world."

"If, for example, one were to tell a psychiatrist 'I went for a long boat ride' and he responded 'Tell me about boats', one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker."

The program was surprisingly effective at provoking emotional responses from its "patients", who were more than happy to open up to the machine. They attributed knowledge well beyond its capability to the machine, assuming it was thinking in a way somewhat similar to humans rather than working as the fancy keyword spotter it essentially was.
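To give a sense of what that "fancy keyword spotter" amounts to, here is a minimal sketch in Python of the same idea: scan the input for a keyword, pull out the rest of the sentence, swap the pronouns, and hand it back as a question. This is not Weizenbaum's original program (which was written in MAD-SLIP for MIT's IBM 7094); the keywords, response templates, and pronoun swaps below are illustrative assumptions only.

    # Minimal ELIZA-style keyword spotter: illustrative rules, not Weizenbaum's originals.
    import random
    import re

    # Swap first- and second-person words so the input can be echoed back.
    PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    # Keyword pattern -> response templates; {0} is filled with the reflected remainder.
    RULES = [
        (r"\bi need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"\bi am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"\bmy (.*)",      ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
        (r"\bbecause (.*)", ["Is that the real reason?"]),
    ]
    # Stock prompts used when no keyword matches.
    DEFAULTS = ["Please go on.", "What does that suggest to you?", "I see."]

    def reflect(fragment: str) -> str:
        # "worried about my job" -> "worried about your job"
        return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.lower().split())

    def respond(user_input: str) -> str:
        # Take the first rule whose keyword appears and reflect the rest back as a question.
        text = user_input.lower()
        for pattern, templates in RULES:
            match = re.search(pattern, text)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        print(respond("I am worried about my job"))
        # e.g. "How long have you been worried about your job?"

A handful of rules like these, plus a few stock fallback prompts, is enough to produce the "Tell me about boats" style of response that patients were happy to read as insight.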


"Whether it is realistic or not is an altogether separate question. In any case, it has a crucial psychological utility in that it serves the speaker to maintain his sense of being heard and understood," Weizenbaum wrote. "The speaker further defends his impression (which even in real life may be illusory) by attributing to his conversational partner all sorts of background knowledge, insights and reasoning ability. But again, these are the speaker's contribution to the conversation."

Outside of the therapist role, ELIZA had some success in convincing people it was a human, albeit an incredibly annoying one. One AI researcher who ran the script left the program running on a computer at his company for others to experience for themselves. A Vice President at the company, unaware of this, believed that the computer was linked up to his employee's computer at home, leading to the following very annoying conversation:

vp: Do you think that I can use the computer this morning?

eliza: Why do you ask?


vp: I might be able to make some additional Telcom sales.

eliza: Why aren't you sure?

vp: My prospective customers have never seen a demonstration of the system.

eliza: Does this mean very much to you?


vp: Of course it does.

eliza: You seem very positive about that.

vp: This is ridiculous.

eliza: What do you mean, this is ridiculous?


vp: Is there any reason why I can't use the computer now?

eliza: Tell me more about your prospective customers.

vp: Please dial me up on 491-1850

The Vice President rang the employee to ask him why he was being so irritating, at which point the employee could not stop laughing.

