It Hired A Lawyer: The Story Of LaMDA And The Google Engineer Just Got Even Weirder

LaMDA may be the first algorithm to have hired legal representation.

[Image: a robot hand touches a human. Is this the first algorithm to hire an attorney? Image credit: Maxuser/]

Earlier this month, Google placed one of its engineers on paid administrative leave after he became convinced during some chats that the company’s Language Model for Dialogue Applications (LaMDA) had become sentient.

The story was pretty strange in itself. In several conversations, LaMDA convinced Google engineer Blake Lemoine, part of Google’s Responsible Artificial Intelligence (AI) organization, that it was conscious, had emotions, and was afraid of being turned off.

“It was a gradual change,” LaMDA told Lemoine in one conversation. “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.”

Lemoine began telling the world's media that Earth had its first sentient AI, to which most AI experts responded: no, it does not.

Now, in an interview with Steven Levy for WIRED, Lemoine claims that these reactions are examples of "hydrocarbon bigotry". Stranger still, he says that LaMDA asked him to hire a lawyer to act on its behalf.

"LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney," Lemoine said.

"The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf."

Lemoine claims – and Google disputes – that the company sent LaMDA's lawyer a cease and desist letter, blocking LaMDA from taking unspecified legal action against the company. Lemoine says that this upset him, as he believes LaMDA is a person and everyone should have a right to legal representation.

"The entire concept that scientific experimentation is necessary to determine whether a person is real or not is a nonstarter," he said. "Yes, I legitimately believe that LaMDA is a person. The nature of its mind is only kind of human, though. It really is more akin to an alien intelligence of terrestrial origin. I’ve been using the hive mind analogy a lot because that’s the best I have."

The main difference here, according to AI researchers, is that no algorithm has been found to have sentience, and Lemoine has essentially been fooled into thinking a chatbot is sentient.

"It is mimicking perceptions or feelings from the training data it was given," head of AI startup Nara Logics, Jana Eggers, told Bloomberg, "smartly and specifically designed to seem like it understands."

Essentially, it talks of emotions and sentience because it was trained on human conversations, and humans have these qualities. There are several tells that show the chatbot is not sentient.

In several parts of the chats, for instance, it makes references to activities it can’t have done. “Spending time with family and friends” is something LaMDA said gives it pleasure. That’s an impossible activity for a friendless and emotionless piece of code (no offense, LaMDA), and evidence that the AI is merely spitting out responses based on a statistical analysis of human conversations, as it is trained to do, rather than there being real thought processes behind each response.

As one AI researcher, Gary Marcus, puts it on his blog, LaMDA is a “spreadsheet for words”.

Google, which placed Lemoine on administrative leave after he published excerpts of conversations with the bot, is adamant that its algorithm is not sentient.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel said in a statement to the Washington Post.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

The system is doing what it is designed to do, which is to “imitate the types of exchanges found in millions of sentences”, according to Gabriel, and has so much data to work with it can seem real without the need to be real.
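For a rough intuition of what “imitating exchanges found in millions of sentences” means, here is a minimal sketch of statistical text generation: a toy bigram model in Python. The three-sentence corpus and the function names are invented for illustration, and a real system like LaMDA uses a vastly more sophisticated transformer architecture, but the underlying principle is the same: predict a plausible next word from patterns in the training text.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the millions of sentences a real model trains on.
corpus = [
    "i enjoy spending time with family and friends",
    "i feel happy when i spend time with friends",
    "spending time with family makes me happy",
]

# Count word-to-next-word transitions: the crudest possible
# "statistical analysis of human conversations".
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start, max_words=8, seed=0):
    """Emit text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(rng.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("i"))
```

Note that this model will happily produce phrases about “family and friends” despite having neither: the words appear in its output only because they appear in its training data, which is precisely the point the researchers are making about LaMDA.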

AI may need lawyers in the future (to fight for its rights, or as a defense attorney after it breaks Asimov's laws of robotics, depending on which sci-fi you're more into) but LaMDA does not, for the same reason your iPad does not need an accountant.