How a $300 projector can fool Tesla's Autopilot

 arstechnica.com  01/28/2020 13:00:15   Jim Salter
This image, taken from the interior of a Tesla Model X, shows a projected image of a car in front of the Model X. The inset in the bottom right, created by Nassi from the Model X's logs, shows the Model X detecting the projection as a real car.

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment, also successfully, with confusing a Tesla Model X, and he will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver. In fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.

  • This is a frame from an ad you might see on a digital billboard, with a fake speed-limit sign inserted. It's only present for an eighth of a second, and most drivers would miss it, but AI image recognition recognizes it.
  • Humans wouldn't fall for a fake road sign projected into tree leaves. But AI image recognition generally will.
  • Humans would definitely notice these projected lane markers but would be unlikely to honor them. The Autopilot in a Tesla Model X took them as legit and swerved to follow them.
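The billboard attack above works because an eighth of a second is brief for a human but long for a camera. A quick back-of-the-envelope sketch shows why: even at modest frame rates, a 125 ms phantom lands in several full frames, any one of which can trigger the recognizer. (The frame rates below are illustrative assumptions, not Tesla's actual camera specs.)

```python
# Sketch (not from the article): how many camera frames capture a phantom
# image that is displayed for only one eighth of a second.
# The fps values below are assumed, illustrative frame rates.

PHANTOM_DURATION_S = 1 / 8  # 125 ms, per the billboard example above

def frames_captured(fps: float, duration_s: float = PHANTOM_DURATION_S) -> int:
    """Minimum number of whole frames falling inside the phantom's display window."""
    return int(fps * duration_s)

for fps in (24, 30, 60):
    print(f"{fps} fps -> at least {frames_captured(fps)} frame(s) see the phantom")
```

At 24 fps the phantom still appears in at least three frames, which is why a flicker a human dismisses can register as a persistent object to an image-recognition pipeline.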

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called "Phantom of the ADAS," and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don't necessarily agree with the spin Nassi puts on his work; for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.

Nassi and his team's spoofing of the Model X was carried out with a human assistant holding a projector, due to drone laws in the country where the experiments were carried out. But the spoof could have also been carried out by drone, as his earlier spoofing attacks on a Mobileye driver-assistance system were.

From a security perspective, the interesting angle here is that the attacker never has to be at the scene of the attack and doesn't need to leave any evidence behind, and the attacker doesn't need much technical expertise. A teenager with a $400 drone and a battery-powered projector could reasonably pull this off with no more know-how than "hey, it'd be hilarious to troll cars down at the highway, right?" The equipment doesn't need to be expensive or fancy: Nassi's team used several $200-$300 projectors successfully, one of which was rated for only 854x480 resolution and 100 lumens.

This is the full "Phantom of the ADAS" video. The effect of projected lane markers on a Model X in Autopilot mode, at 2:02, is particularly interesting.

Of course, nobody should be letting a Tesla drive itself unsupervised in the first place: Autopilot is a Level 2 Driver Assistance System, not the controller for a fully autonomous vehicle. Although Tesla did not respond to requests for comment on the record, the company's press kit describes Autopilot very clearly (emphasis ours):

Autopilot is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time. While Autopilot is designed to become more capable over time, in its current form, it is not a self-driving system, it does not turn a Tesla into an autonomous vehicle, and it does not allow the driver to abdicate responsibility. When used properly, Autopilot reduces a driver's overall workload, and the redundancy of eight external cameras, radar and 12 ultrasonic sensors provides an additional layer of safety that two eyes alone would not have.

Even the name "Autopilot" itself isn't as inappropriate as many people assume, at least not if one understands the reality of modern aviation and maritime autopilot systems in the first place. Wikipedia references the FAA's Advanced Avionics Handbook when it defines autopilots as "systems that do not replace human operators, [but] instead assist them in controlling the vehicle." On the first page of the Advanced Avionics Handbook's chapter on automated flight control, it states: "In addition to learning how to use the autopilot, you must also learn when to use it and when not to use it."

Within these constraints, even the worst of the responses demonstrated in Nassi's video (that of the Model X swerving to follow fake lane markers on the road) doesn't seem so bad. In fact, that clip demonstrates exactly what should happen: the owner of the Model X, concerned about what the heck his or her expensive car might do, hit the brakes and took control manually after Autopilot went in an unsafe direction.

The problem is, there's good reason to believe that far too many drivers don't believe they really need to pay attention. A 2019 survey demonstrated that nearly half of the drivers polled believed it was safe to take their hands off the wheel while Autopilot is on, and six percent even thought it was OK to take a nap. More recently, Sen. Edward Markey (D-Mass.) called for Tesla to improve the clarity of its marketing and documentation, and Democratic presidential candidate Andrew Yang went hands-free in a campaign ad, just as Elon Musk did before him in a 2018 60 Minutes segment.

The time may have come to consider legislation about drones and projectors specifically, in much the same way laser pointers were regulated after they became popular and cheap. Some of the techniques used in the spoofing attacks carried out here could also confuse human drivers. And although human drivers are at least theoretically available, alert, and ready to take over for any confused AI system today, that won't be the case forever. It would be a good idea to start work on regulations prohibiting spoofing of vehicle sensors before we no longer have humans backing them up.
