People who paid Tesla $3,000 for full self-driving might be out of luck

arstechnica.com | 4/17/2018 | Timothy B. Lee
Tesla CEO Elon Musk in 2015.

Tesla has an Autopilot problem, and it goes far beyond the fallout from last month's deadly crash in Mountain View, California.

Tesla charges $5,000 for Autopilot's lane-keeping and advanced cruise control features. On top of that, customers can pay $3,000 for what Tesla describes as "Full Self-Driving Capability."

"All you will need to do is get in and tell your car where to go," Tesla's ordering page says. "Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed."

None of these "full self-driving" capabilities are available yet. "Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction," the page says. "It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval."

But the big reason full self-driving isn't available yet has nothing to do with "regulatory approval." The problem is that Tesla hasn't created the technology yet. Indeed, the company could be years away from completing work on it, and some experts doubt it will ever be possible to achieve full self-driving capabilities with the hardware installed on today's Tesla vehicles.

"It's a vastly more difficult problem than most people realize," said Sam Abuelsamid, an analyst at Navigant Research and a former auto industry engineer.

Tesla has a history of pre-selling products based on optimistic delivery schedules. This approach has served the company pretty well in the past, as customers ultimately loved their cars once they showed up. But that strategy could backfire badly when it comes to Autopilot.

Some experts doubt it’s possible to achieve full self-driving using Tesla’s hardware

A Google self-driving car, built on a modified Toyota Prius, combines information gathered from Google Street View with artificial intelligence software that gathers input from video cameras inside the car, a lidar sensor on top of the vehicle, radar sensors on the front of the vehicle, and a position sensor attached to one of the rear wheels that helps locate the car's position on the map.

The most obvious thing missing from Tesla's cars, from an autonomy perspective, is lidar. The companies that have made the most progress toward fully self-driving cars—including Waymo, Uber, and GM's Cruise—all have lidar on their cars.

Defying the industry consensus, Tesla CEO Elon Musk has repeatedly insisted that lidar is merely a "crutch" and that it's possible to build fully autonomous vehicles using only cameras and radar.

But most industry insiders believe lidar plays an important—and probably essential—role. Cameras offer long range and high resolution, but they're not very good at estimating distances, and they don't work as well in low-light conditions. Radar provides precise distance and velocity measurements but at very low resolution.

Lidar occupies a kind of sweet spot between these two: it offers precise distance measurements not available from cameras while producing a much higher-resolution map of the surrounding area than can be provided with radar. And unlike cameras, lidar works as well at night as it does in the daytime.
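To make that resolution tradeoff concrete, here is a rough back-of-the-envelope sketch (ours, not the article's, and using assumed ballpark angular resolutions) of how far apart individual sensor returns land at 100 meters:

```python
import math

# Assumed ballpark angular resolutions, for illustration only:
# automotive radar is often on the order of a few degrees, while a
# spinning lidar can resolve a few tenths of a degree.
SENSORS = {
    "radar": 4.0,   # degrees (assumed)
    "lidar": 0.2,   # degrees (assumed)
}

def lateral_spacing(angular_res_deg: float, range_m: float) -> float:
    """Approximate distance between adjacent returns at a given range."""
    return range_m * math.radians(angular_res_deg)

for name, res in SENSORS.items():
    print(f"{name}: ~{lateral_spacing(res, 100.0):.1f} m between returns at 100 m")

# radar: ~7.0 m between returns at 100 m  (too coarse to outline a car)
# lidar: ~0.3 m between returns at 100 m  (fine enough to sketch its shape)
```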

Of course, humans drive cars using the cameras we call our eyes, so camera-only driving is clearly possible in principle. But it took millions of years for evolution to develop our own far-from-perfect spatial navigation skills. The question is whether Tesla—or anyone else—will figure out how to develop better-than-human driving software within the next decade or two. Many experts believe that lidar sensors provide an extra margin of safety that will allow driverless cars to be introduced years earlier than would otherwise be possible.

And that's not the only deficiency of the "full self-driving" hardware on Tesla vehicles, according to Abuelsamid. "You have to have levels of redundancy that simply aren't there in those vehicles," Abuelsamid told Ars in an interview last week.

It's better to think of driver-assistance technologies and full self-driving cars as distinct systems

Cars from Waymo and Cruise have redundant main computers, redundant braking and steering systems, and redundant power supplies. This ensures that if any single component fails, a backup system will be ready to take over, allowing the car to gracefully come to a stop at the side of the road without causing a crash.
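As a loose illustration of that failover idea (a hypothetical sketch, not Waymo's or Cruise's actual architecture), the supervision logic amounts to something like this:

```python
from dataclasses import dataclass

@dataclass
class Computer:
    name: str
    healthy: bool = True

class DrivingSupervisor:
    """Hypothetical supervisor that picks a healthy computer or stops the car."""

    def __init__(self, primary: Computer, backup: Computer):
        self.primary = primary
        self.backup = backup

    def select_action(self) -> str:
        # Prefer the primary computer, fall back to the backup, and
        # only as a last resort stop the vehicle in place.
        if self.primary.healthy:
            return f"{self.primary.name} keeps driving"
        if self.backup.healthy:
            return f"{self.backup.name} takes over and pulls over when safe"
        return "no healthy computer: brake to a stop"

supervisor = DrivingSupervisor(Computer("primary"), Computer("backup"))
supervisor.primary.healthy = False   # simulate a hardware fault
print(supervisor.select_action())    # -> backup takes over
```

The same pattern applies to braking, steering, and power: each critical function needs a second path the system can switch to when the first one fails.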

In contrast, a teardown of a Model S last August found a single Nvidia Drive PX 2 board with one SoC and one GPU. If one of those components fails, the car might not be able to recover gracefully.

A final issue is raw computing power. When Tesla introduced the "full self-driving" feature in late 2016, the Nvidia Drive PX 2 was cutting-edge technology. But last year Nvidia introduced a next-generation PX platform, codenamed Pegasus, that Nvidia claims has 13 times the computing power of the PX 2. Since no one has built a fully self-driving car yet, it's not known how much computing power such a system will ultimately require. Those old Nvidia chips might not be enough.
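For a rough sense of the numbers (widely cited public marketing figures, quoted here as an assumption rather than taken from the article), Nvidia pitched the Drive PX 2 at about 24 trillion deep-learning operations per second and Pegasus at about 320, which is roughly consistent with the "13 times" claim:

```python
# Assumed marketing figures: Drive PX 2 ~24 DL TOPS, Drive PX Pegasus ~320 TOPS.
px2_tops = 24
pegasus_tops = 320
print(f"Pegasus vs. PX 2: ~{pegasus_tops / px2_tops:.1f}x")   # ~13.3x
```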

The five-level framework may be leading Tesla astray

The Society of Automotive Engineers (SAE) has developed a five-level conceptual framework for thinking about autonomous vehicles, ranging from level 1 for basic cruise control to level 5 for a car that can operate autonomously in all situations. (There's also "level 0" to denote a car with no autonomous features.)
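Here is a small sketch of that ladder as a data structure (the one-line glosses paraphrase the description above, not SAE's official wording):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased summary of the SAE automation levels described above."""
    NO_AUTOMATION = 0           # no autonomous features
    DRIVER_ASSISTANCE = 1       # e.g. basic cruise control
    PARTIAL_AUTOMATION = 2      # lane-keeping plus cruise control; driver supervises
    CONDITIONAL_AUTOMATION = 3  # the car drives itself in limited conditions
    HIGH_AUTOMATION = 4         # no driver needed within a defined area
    FULL_AUTOMATION = 5         # operates autonomously in all situations

level = SAELevel.PARTIAL_AUTOMATION
print(level.name, level.value)   # PARTIAL_AUTOMATION 2
```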

This framework encourages people to view driver-assistance systems and fully self-driving cars as two points on a continuous spectrum. That's also the implicit assumption of Tesla's Autopilot strategy. The company is selling a driver-assistance system today, but it plans to use software updates to gradually turn it into a fully self-driving system. But there's growing reason to think this is a mistake—that it's better to think of driver-assistance technologies and full self-driving cars as distinct systems.

In a driver-assistance system, the human driver is expected to pay attention 100 percent of the time and to correct any mistakes the system makes. In contrast, a fully self-driving system is built on the assumption that a human driver will never need to take over.

Systems in the middle—with human driver and software both sharing some responsibility—are a safety hazard. Once a self-driving system gets pretty good, humans start to trust it and stop paying attention to the road. This can happen long before the system is actually safer than a human driver, leading to more fatalities rather than fewer.

That's the conclusion Google reached several years ago when it shifted from building driver-assistance technology to building cars that are designed to be fully autonomous from the start. The company's current plan is to offer a driverless taxi service that won't even allow passengers to take the wheel.

The decision to design cars to be self-driving from the ground up has had a profound effect on the way Google—now Waymo—has approached the driverless car problem. Waymo realized that the key to making this work was a different kind of gradualism: fielding a taxi service that initially works only on a limited set of streets and in a limited range of weather conditions. Waymo is planning to launch its service in the Phoenix area, which has some of the nation's best-maintained roads, lightest traffic, and least-challenging weather. Once Waymo is confident that the technology is working well in this forgiving environment, it will gradually expand service to other parts of the country.

A big advantage of this business model is that it gives Waymo a lot of flexibility to replace and upgrade the sensors and other equipment on its cars over time. If it discovers that its initial set of sensors isn't sufficient for fully self-driving operation, it can replace them with more powerful sensors—without worrying about charging customers for the upgrade. If it finds different kinds of sensors are needed to operate in snowy weather, it can install those on cars operating in snowy parts of the country. If it can't figure out how to get fully automated driving to work in a particular region, it can focus on expanding its taxi service to other regions first.

Waymo's technology depends heavily on gathering high-resolution maps of the areas where it's operating. The company can do this one city at a time, using revenues from early cities to finance later expansions.

Tesla's business model gives it much less flexibility. Launching a driverless car feature that initially only works in Phoenix would make every Tesla customer not located in Phoenix angry. But offering full self-driving capabilities nationwide might require a massively expensive effort to collect or purchase map data for the whole country.

If Tesla finds that its current hardware isn't sufficient for full self-driving operation, it will face an awkward choice between charging customers for an upgrade (after promising that the old hardware would be adequate) or paying those costs itself. If a sensor fails, Tesla will have to choose between disabling self-driving capability until the customer repairs it or allowing the car to continue operating with a higher risk of a crash.



The landing page for Autopilot on the Tesla website.

Meanwhile, the decision to blur together driver assistance and full self-driving capabilities makes it more difficult for Tesla to clearly communicate about the limits of its current driver-assistance technology. In the wake of last month's crash in Mountain View, Tesla has emphasized that drivers bear full responsibility for overseeing the operation of the Autopilot system—and need to have their hands on the wheel and their eyes on the road at all times.

Yet Tesla's official Autopilot webpage paints a very different picture. "Full Self-Driving Hardware on All Cars," the page boasts. Below that is a video of a Tesla driver cruising through a complex urban environment without touching the steering wheel.

Anupam Chander, a legal scholar at the University of California at Davis, argues that the mixed messages Tesla has been sending about Autopilot are "unbelievably irresponsible."

Tesla's story, of course, is that this headline and this video are referring to a future version of Autopilot that hasn't been released yet. But it's not hard to imagine how a customer might be confused. A customer might assume that "Full Self-Driving Hardware on All Cars" means that all Tesla cars have full self-driving capabilities. The distinction between hardware and software isn't obvious—especially since people aren't used to thinking about cars as computing platforms. Someone watching the video might naturally assume that it is showing the capabilities of current Tesla cars, since there's no disclaimer stating otherwise.

Tesla’s strategy of overselling might bite the company here

The Tesla P100D at rest.
A lot of Tesla's current difficulties can be traced back to the fact that the company is trying to sell self-driving cars using the traditional car industry business model. There's a reason a growing number of conventional car companies—including GM, Ford, Volkswagen, and Hyundai—are moving toward an on-demand rental model for their early self-driving cars.

But Tesla's unconventional sales tactics have made the problem worse. Tesla is still basically a cash-strapped startup. It has a history of not only setting optimistic deadlines for itself but also raising capital by accepting advance payments from customers based on those optimistic deadlines. With all three of its recent cars (the Model S, Model X, and Model 3), Tesla accepted millions of dollars in cash deposits from customers long before the cars were actually for sale.

In all three cases, Musk's initial production schedule proved too optimistic. That meant Tesla wound up holding on to customer deposits longer than expected. These payments helped to finance the capital-intensive process of actually manufacturing the cars. But Tesla eventually managed to produce enough Model S and Model X vehicles to satisfy demand, and most customers were ultimately happy with their cars.

Aggressively pushing the envelope here is one of the ways Tesla has accomplished what many people thought was impossible a decade ago: building a totally new mainstream automaker from scratch.

"Tesla was basically the original Kickstarter," Abuelsamid told Ars. "They've been the only automaker that has gone out there and taken paid deposits from potential customers years in advance of the product becoming a reality. It's such a capital-intensive business that you would not be able to raise enough money to really get the company going" without using Tesla's aggressive fundraising tactics.

Tesla used the same basic approach in selling Autopilot. The company has been taking customers' money for a product that doesn't exist yet, while making unrealistic promises about when the technology will be ready. In June 2016, Musk predicted that the world was "less than two years away from complete autonomy."

Musk predicted in October 2016 that a Tesla would be able to drive itself, fully autonomously, from Los Angeles to New York by the end of 2017.

But nearly two years later, there's little sign that Tesla is closer to full autonomy. Tesla began advertising (and accepting money for) the "full self-driving" feature in the fall of 2016. According to the Wall Street Journal, the move blindsided Sterling Anderson, then the head of Tesla's self-driving project. "This was Elon’s decision," he reportedly said when an employee asked about the move in late 2016. Anderson resigned from the project a couple of months later. Anderson's successor, Chris Lattner, only lasted six months in the position, and the company lost several other key managers and engineers last year.

While there are obvious similarities between pre-selling cars and pre-selling Autopilot, there's also an important difference.

When Tesla begins accepting pre-orders for a new car, it typically has a working prototype and a clear idea of the feature set. There's always uncertainty about how long it will take the company to ramp up production of the car, and the company has often been too optimistic. But there has never been much doubt that Tesla would figure out how to produce the vehicle sooner or later.

In contrast, it's genuinely unclear how long it will take Tesla to develop a car with full self-driving capabilities—or if it's even possible to do so using the hardware Tesla has been shipping to people. This isn't just a case where the full self-driving feature might arrive a year or two late. There's a real question about whether Tesla can deliver the technology at all.

But at this point, Tesla has a strong incentive not to rethink its Autopilot strategy—at least not publicly. If the company were to admit that it made a mistake—for example, that it will need lidar to reach full autonomy—then it would have to deal with a lot of angry customers who paid $3,000 apiece for the full self-driving package. But as long as the company is continuing to work on the technology, however slowly, it can kick that can further down the road.

Tesla did not respond to multiple emails seeking comment for this story.
