Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario?

October 26, 2015 | by Jonathan O'Callaghan

[Illustration: How should a self-driving car react in these situations? Image credit: Bonnefon et al.]

Picture the scene: You’re in a self-driving car and, after turning a corner, find that you are on course for an unavoidable collision with a group of 10 people in the road with walls on either side. Should the car swerve to the side into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?

This is a moral and ethical dilemma that a team of researchers, led by Jean-Francois Bonnefon from the Toulouse School of Economics, discusses in a new paper published on arXiv. They note that accidents like this will be inevitable as self-driving cars become more common – and what the cars are programmed to do in these situations could play a huge role in public adoption of the technology.

"It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas," the researchers wrote. "We argue [that] to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm."

In their paper, the researchers surveyed several hundred people via Amazon's Mechanical Turk, an online crowdsourcing tool. They presented the participants with a number of scenarios, including the one mentioned earlier, and varied the details: the number of people in the car, the number of people in the group, the ages of the car's occupants (including children), and so on.

The results are perhaps not too surprising: on the whole, people were willing to sacrifice the driver in order to save others, but they were far less willing when they imagined themselves as the driver. And while 75% of respondents thought it would be moral to swerve, only 65% thought the cars would actually be programmed to swerve.

[Image: How much control are we willing to part with? Image credit: RioPatuca.]

"On a scale from -50 (protect the driver at all costs) to +50 (maximize the number of lives saved), the average response was +24," the researchers wrote. "Results suggest that participants were generally comfortable with utilitarian AVs, programmed to minimize an accident’s death toll."
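The "utilitarian" programming the participants described amounts to a simple decision rule: among the available maneuvers, pick the one expected to cost the fewest lives. A minimal sketch of that rule, with hypothetical scenario numbers (not taken from the study), might look like this:

```python
# Purely illustrative sketch of a "utilitarian" crash-response rule:
# choose the action that minimizes the expected death toll.
# Action names and fatality estimates are hypothetical examples.

def choose_action(outcomes):
    """outcomes: dict mapping an action name to its expected fatalities.
    Returns the action with the fewest expected fatalities."""
    return min(outcomes, key=outcomes.get)

# The article's opening scenario: swerving into the wall kills the sole
# occupant (1 death); braking into the crowd kills the group (10 deaths).
scenario = {"swerve_into_wall": 1, "brake_into_crowd": 10}
print(choose_action(scenario))  # -> swerve_into_wall
```

A driver-protective algorithm would invert this choice whenever the occupant is among the casualties – which is exactly the trade-off the survey probes.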

The legal issues surrounding this remain something of a grey area, though. Will new laws require the car to swerve, on the grounds that a machine can make an emotionless "greater good" decision? Or will cars be allowed to have different levels of morality? "If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?" ask the researchers.

As MIT Technology Review notes, however, self-driving cars are inherently safer than human drivers – and perhaps that in itself creates a new dilemma. "If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents," the MIT article says. "The result is a Catch-22 situation."

There’s little doubt self-driving cars are the logical future for public transport, and they promise to revolutionize travel around the globe. But as this study highlights, there are still significant challenges that need to be addressed.

"Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," said the researchers. "As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent."
