Over the coming decades, humans will come into ever more everyday social contact with robots. For most robotic jobs, programming the machine to perform the task is straightforward enough. Getting humans to actually like a robot, however, is a whole other challenge.
A team of robotics scientists believes they have found the way to a human's heart: faults, errors, and awkwardness.
Their new study, published in the journal Frontiers in Robotics and AI, shows that people take a significantly stronger liking to robots that behave awkwardly and make errors, as opposed to those goody-two-shoes robots that interact flawlessly.
"Our results showed that the participants liked the faulty robot significantly more than the flawless one. This finding confirms the Pratfall Effect, which states that people's attractiveness increases when they make a mistake," Nicole Mirnig, a PhD candidate at the Center for Human-Computer Interaction, University of Salzburg, Austria, said in a statement.
Just think of all the robots that have warmed the cold hearts of the Internet in the past few years: Steve, the poor security robot that threw itself into a fountain last month; the Russian robot that supposedly escaped its research facility; the hitchhiking robot that was decapitated and left in a ditch (image above); or the falling robots of DARPA's Robotics Challenge (video below). All of these share a uniquely human vulnerability and clumsiness we can relate to on some level.
To discover what humans like in a robot, the researchers had robots interact with human participants and then complete a couple of LEGO building tasks. After the interaction, they asked the participants to rate the robot's anthropomorphism, likeability, and perceived intelligence. They also observed the participants' reactions when the robot made a mistake.
“Laughter [is a] typical reaction to unexpected robot behavior,” the study noted.
Of course, being a flailing robotic idiot isn't always helpful in practical tasks. However, the researchers say that robots could be taught (or even learn for themselves) that these "errors" can work to their benefit. If robots are able to pick up on human social cues and modify their behavior accordingly, they could master social intelligence, just like us.
"Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction," Mirnig added. "For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user's social signals could let the user know that it understands the problem and actively apply error recovery strategies."