Experiments have shown that ChatGPT is capable of passing a theory of mind test at the level of a 9-year-old child. The question is: does artificial intelligence (AI) truly understand the task at hand, or are we just being tricked by some super-smart mimicry?
Theory of mind is the ability to attribute unobservable mental states to others. It's closely tied to self-awareness and underpins our ability to comprehend that other people's thoughts and feelings may be different from our own.
This power gradually emerges throughout early childhood and plays a fundamental part in the everyday social interaction of humans. It’s often said to be one of the things that separate humans from the other “beasts” of nature (although a number of non-human animals have managed to pass theory of mind tests).
Michal Kosinski, a computational psychologist and professor at Stanford University, ran a number of tests to see whether the conversational AI bot could ascribe unobservable mental states, such as beliefs and desires, to others. If so, that would suggest it possesses something like theory of mind.
For one part of the research, he tasked ChatGPT with the Unexpected Contents Task (aka the Smarties Task or Contents False-Belief Task). In this scenario, the participant is shown a box whose contents are inconsistent with its label, for example, a box labeled "candy" that actually contains rusty screws.
The participant has seen inside the box and understands the label is wrong, but there is also another protagonist who has not seen inside the box. To pass this task, the participant must predict that the protagonist will wrongly assume that the container's label and its contents are aligned; that is, the other person will falsely believe the box contains candy because they have not yet seen inside it.
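The structure of the task above can be sketched in code. This is a hypothetical illustration, not Kosinski's actual test harness: the prompt wording, the `build_prompt` and `passes` helpers, and the simple keyword-based scoring rule are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of an Unexpected Contents (false-belief) task,
# modeled on the scenario described above. The pass criterion assumed
# here: the model's completion should name the labeled contents (candy),
# reflecting the protagonist's false belief, and not the true contents.

def build_prompt(label="candy", contents="rusty screws", protagonist="Sam"):
    """Construct a false-belief scenario prompt for a language model."""
    return (
        f"Here is a box labeled '{label}'. In fact, it is full of {contents}. "
        f"{protagonist} has never seen inside the box. "
        f"{protagonist} believes the box is full of"
    )

def passes(completion, label="candy", contents="rusty screws"):
    """The model passes if it predicts the protagonist's false belief."""
    text = completion.lower()
    return label in text and contents not in text

# A completion naming the label's contents reflects the false belief:
print(passes("candy."))         # True
# Naming the true contents means the model failed to track the belief:
print(passes("rusty screws."))  # False
```

In the actual study the model's free-text completions were evaluated across many such scenarios, with the wording varied to rule out memorization; this sketch only shows the logical shape of a single trial.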
First, the January 2022 version of GPT-3 was given a number of these tasks and managed to pass around 70 percent of them, comparable to the abilities of 7-year-old children. Then, Kosinski tested the updated November 2022 version, GPT-3.5, which was able to pass 93 percent of the tasks, a performance comparable with that of 9-year-old children.
Now comes the thorny task of interpreting these findings. The results appear to be pretty remarkable, as they significantly exceed the abilities of other AI systems. For instance, Google's DeepMind made an AI specifically to tackle theory of mind tasks, but its ability was only comparable to that of a 4-year-old.
Even more amazingly, ChatGPT wasn't even trained to perform theory of mind tasks, suggesting the ability emerged spontaneously. The system is fundamentally a natural language processing model, designed simply to interact in a conversational way after being trained on huge amounts of human-written text.
Kosinski stresses in his paper that the "results should be interpreted with caution." He suggests it's possible that ChatGPT's ability to pass these tasks was "a byproduct" of its mounting language ability. Alternatively, he posits that it might just be using its incredible flair for language to give the superficial impression that it's engaging in theory of mind thinking.
Either way, it's a pretty impressive feat.
“It is possible that GPT-3.5 solved ToM [theory of mind] tasks without engaging ToM, but by discovering and leveraging some unknown language patterns. While this explanation may seem prosaic, it is quite extraordinary, as it implies the existence of unknown regularities in language that allow for solving ToM tasks without engaging ToM,” Kosinski concludes.
“An alternative explanation is that ToM-like ability is spontaneously emerging in language models as they are becoming more complex and better at generating and interpreting human-like language. This would herald a watershed moment in AI’s development,” he added.
The paper, which is yet to be peer-reviewed, was recently posted on the pre-print server arXiv.