For all its focus on AI algorithms, sharp-eyed cameras, and pedal-less pods, the arrival of driverless cars continues to revolve around philosophical questions. In the event of an unavoidable collision, should a self-driving car barrel into a group of pedestrians or kill its occupant? Is self-driving truck technology that puts millions of people out of work actually good for society? If self-driving cars are already safer than human drivers, should they be on the roads, even if some people will inevitably die in them?
Tesla CEO Elon Musk believes the answer to the last question is an emphatic “yes.” On Thursday, he received some level of vindication from the National Highway Traffic Safety Administration, which found his company’s Autopilot feature wasn’t at fault in a fatal collision between a tractor trailer and a Tesla vehicle using the semi-autonomous driving mode in May. The government agency could have issued a recall on Autopilot-enabled Teslas and cratered consumer confidence in self-driving technology. Instead, the NHTSA said it discovered no defects in Tesla’s Autopilot system. In fact, the agency found that Teslas equipped with the Autopilot feature Autosteer (used to automatically keep a vehicle within highway lane markings) had a crash rate 40 percent lower than those without it.
Musk was quick to tweet out the new data point as a “report highlight.” The man who is simultaneously trying to build self-driving cars, fight the fossil-fuel industry, convince Americans to embrace renewable energy, and, oh yeah, go to Mars, is relentlessly empirical. He has very little patience for media narratives that get in the way of his fact-based worldview.
So it’s no shock that the critical press coverage of Joshua Brown’s death, the first involving a Tesla in Autopilot mode, made him palpably angry. “Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he said in an email to a Fortune writer in July. Going further with this logic, he told reporters at an October press conference that “if, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people.” A Tesla blog post at the peak of the Autopilot controversy in July called Brown’s death a “statistical inevitability.”
These are arguments based in reason (though calling a semi-autonomous driving system “Autopilot” may also be dissuading people from paying attention to the road). But reason is only one factor at play in the adoption of driverless cars. Even if human driving is a statistically unsafe activity, few individuals actually feel unsafe when they drive. Ceding human agency to an unknowably complex machine sounds, on its face, far more dangerous. A recent survey by Kelley Blue Book found that a majority of drivers prefer to have full control of their vehicle, even if it’s less safe for other people on the road. Musk has to clear psychological hurdles as well as technological ones.
Through their aggressive rollout of driverless car tech, Tesla and companies like Uber (which sped its driverless cars into San Francisco briefly before being kicked out over safety concerns) are betting that nothing awful will happen on their technology’s watch — or that, if it does, the statistics will show it really isn’t that awful, so long as you’re willing to trust the statistics. That’s a big gamble for an unproven technology in a culture increasingly driven by fear of the unknown. The NHTSA report offers validation for Musk’s logic, but it feels more like a disaster averted than a case fundamentally won. Proving that driverless cars are safer than human drivers is the work of a researcher. Convincing real people that they will feel safer in a driverless car than behind the wheel themselves is the work of a business. And Musk and his competitors are still working on proving that business is viable.