Authorities in the US are investigating a fatal incident earlier this year involving a semi-autonomous car. Details of the tragic event have only just been disclosed to the public.
In Florida on May 7, the driver of a Tesla Model S, Joshua Brown, engaged the vehicle’s Autopilot, which can guide the car along roads, react to traffic, and change lanes, all without the aid of the human driver-turned-passenger.
Unfortunately, the system failed to distinguish the white side of a tractor-trailer against the bright white sky. The car attempted to drive at full speed underneath the trailer; the windshield struck its underside, and the force of the collision tore the top off the vehicle.
Although the driver of the truck was uninjured in the collision, Brown was killed, and the National Highway Traffic Safety Administration (NHTSA) has opened an inquiry into the incident.
Before this, the most high-profile collision involving a self-driving car came when one of Google’s autonomous prototypes slowly bumped into a bus, causing little damage and no injuries. This new incident is clearly far more serious, and marks a dark day for the self-driving car initiatives spearheaded by both Google and Tesla.
“This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles,” Tesla wrote in a blog post on its website, making its views clear on the safety of self-driving cars compared to human drivers.
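To put those three numbers on a common scale, here is a minimal back-of-the-envelope sketch; the mileage figures are the ones Tesla quotes, while the labels and the per-billion-miles conversion are ours, added purely for illustration:

```python
# Illustrative arithmetic only, using the figures quoted in Tesla's blog post.
# Converts "one fatality per N miles" into fatalities per billion miles
# so the three rates can be compared directly.

MILES_PER_FATALITY = {
    "Tesla Autopilot (first fatality)": 130e6,  # ~130 million miles
    "All US vehicles": 94e6,                    # one fatality per 94 million miles
    "Worldwide": 60e6,                          # roughly one fatality per 60 million miles
}

for label, miles in MILES_PER_FATALITY.items():
    rate = 1e9 / miles  # fatalities per billion miles driven
    print(f"{label}: {rate:.1f} fatalities per billion miles")

# Output:
# Tesla Autopilot (first fatality): 7.7 fatalities per billion miles
# All US vehicles: 10.6 fatalities per billion miles
# Worldwide: 16.7 fatalities per billion miles
```

On this framing, Autopilot’s record looks modestly better than the US average, though a single fatality is far too small a sample to draw statistically meaningful conclusions from.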
“It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations,” the Tesla team adds, before noting that the Autopilot feature is still in its beta testing phase, and that “as more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing.”
Traffic accidents are mostly caused by human error. Dmitry Kalinovsky/Shutterstock
Tesla isn’t wrong when it claims that self-driving cars can be safer than human-driven ones. In 2010, there were 2.24 million injuries from motor vehicle accidents in the US alone, with human error largely to blame, and 35,332 lives were lost in these crashes. Proponents argue that self-driving cars could cut that toll dramatically.
However, this new incident reveals what happens when the onboard software encounters an edge case its programmers hadn’t considered. Brown could have grabbed the wheel and overridden the Autopilot had his reactions been quick enough, and some may use this case to argue that autonomous cars should always give a human a “kill switch” to prevent this type of accident from happening in the first place.
Some argue, though, that handing control back to a human in a car that is already swerving to avoid a crash may inadvertently cause the very crash it is trying to prevent.
Things get even murkier when you consider that an autonomous car may have to make a moral choice. What if it were barreling towards a crowd of pedestrians and had to choose between ploughing through them or swerving dramatically out of the way, saving them but crashing itself – and its passenger – into a wall or another car?
A Tesla enthusiast, Brown was known for taking his hands off the wheel and letting the autopilot feature do its thing. Joshua Brown via YouTube
It’s a difficult issue, and one that still hasn’t been resolved. Fortunately, this type of scenario has yet to happen in real life, but it will cause a legal maelstrom when it inevitably does.
For now, the legal issues surrounding this tragic incident will prove difficult for Tesla to handle. After all, who exactly is responsible for the accident – the driver who engaged the Autopilot, the Autopilot itself, or the Tesla programmers who didn’t foresee this very specific event in the first place?