Hackers Make Tesla Autopilot Swerve Into Oncoming Traffic Using Just Stickers

Keen Lab

Automated cars have a hill to climb if they want to win over people's trust. A Reuters poll released yesterday revealed that half of US adults think self-driving vehicles are more dangerous than conventional ones, while a poll in March found that 73 percent of Americans say they are too afraid to ride in a fully automated vehicle, up from 49 percent in 2017.

If you're one of these people, this story probably won't do anything to reassure you, but try to remember that flagging up this kind of problem is a good thing: it's a flaw the industry now knows about and can try to fix. Tesla actually wants people to find these faults, and offers a free Tesla Model 3 as a "bug bounty" to anyone who finds one, in order to help make its cars safer.

Bear that in mind as you read this next bit.

Researchers at cybersecurity firm Keen Lab have managed to cause a Tesla's Autopilot feature to swerve off course and into the wrong lane, using just a few stickers they placed on the road.

In a paper published last week, researchers first tried to confuse the Tesla by blurring out markings on the left lane. The system easily recognized this, and the team concluded that "it is difficult for an attacker to deploy some unobtrusive markings in the physical world to disable the lane recognition function of a moving Tesla vehicle."

So far so good.

However, the researchers then attempted a "fake lane attack," with much better (depending on your perspective) results.

By placing just three stickers on the road, they were able to trick the Tesla into moving into the opposite lane, and potentially directly at oncoming traffic.


"Misleading the autopilot vehicle to the wrong direction with some patches made by a malicious attacker... is more dangerous than making it fail to recognize the lane," the researchers write in the paper.

"Tesla autopilot module’s lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand, and dust interference), but it still doesn’t handle the situation correctly in our test scenario."

What's worrying about this attack is how easy it would be for a malicious attacker to carry out, as it doesn't require any physical or remote connection to the vehicle.

"This kind of attack is simple to deploy, and the materials are easy to obtain," the researchers note.

The team hopes that the potential defects they exposed can be dealt with by the manufacturers, helping them improve the stability and reliability of their automated car systems.
