
The Deadly Recklessness of the Self-Driving Car Industry

Autonomous vehicles were supposed to make driving safer, and they may yet—some of the more optimistic research indicates self-driving cars could save tens of thousands of lives a year in the U.S. alone. But so far, a recklessness has defined the culture of the largest companies pursuing the technology—Uber, Google, and arguably even Tesla—and has led directly to unnecessary crashes, injury, even death.

Let’s be clear about this, because it seems to me that these companies have gotten a bit of a pass for undertaking a difficult, potentially ‘revolutionary’ technology, and because blame can appear nebulous in car crashes, or in these cases can be directed toward the humans who were supposed to be watching the road. These companies’ actions (or sometimes, lack of action) have led to loss of life and limb. In the process, they have darkened the outlook for the field in general, sapping public trust in self-driving cars and delaying the rollout of what many hope will be a life-saving technology.

The litany of failures at the most visible companies pursuing self-driving technology underlines the fact that autonomous systems are only as safe as the people and organizations building them.

  • According to an email recently obtained by the Information, Uber’s self-driving car division may have been not only reckless but outright negligent. The company’s executive staff reportedly ignored detailed warnings from its own safety team, unsafe practices continued, and a pedestrian died. Before that, a host of accidents and near misses had gone unheeded.
  • At least one major executive in Google’s autonomous car division reportedly exempted himself from test program protocol, directly caused a serious crash, injured his passenger, and never informed police that a self-driving car had caused it. Waymo, now a subsidiary of Google parent Alphabet, has been involved, by my count, in 21 reported crashes this year, according to California DMV records, though it was at fault in just a small fraction.
  • On two separate occasions, Autopilot, Tesla’s semi-autonomous driving system, was engaged when drivers suffered fatal car crashes. In October, a Florida Tesla owner sued the company after his car was in a serious crash while on Autopilot, claiming the company “has duped consumers … into believing that the autopilot system it offers with Tesla vehicles at additional cost can safely transport passengers at highway speeds with minimal input and oversight from those passengers.” (Tesla, of course, denies this characterization.) These cases are muddier because Tesla explicitly warns drivers not to let the system drive the car entirely and has installed safeguards to deter bad driver behavior. Yet Tesla continues to advertise “Full Self-Driving Hardware on All Cars” on its website, its own engineers told regulators that they anticipated some drivers would rely fully on the system, and publicly the company continues to deny that Autopilot might engender any dangerous over-reliance in drivers.

No wonder the public is wary of self-driving cars.


“At the moment, testing in the U.S. is pretty reckless,” says Dr. Jack Stilgoe, a senior lecturer in the science and technology department at University College London, and the principal investigator in the forthcoming Driverless Futures project. “It is being left to companies to decide what risks are acceptable.” Often, these companies have clearly decided that very high risks are acceptable.


The newest and most glaring example of just how reckless corporations in the autonomous vehicle space can be involves the now-infamous fatal crash in Tempe, Arizona, where one of Uber’s cars struck and killed a 49-year-old pedestrian. The Information obtained an email reportedly sent by Robbie Miller, a former manager in the testing-operations group, to seven Uber executives, including the head of the company’s autonomous vehicle unit, warning that the software powering the taxis was faulty and that the backup drivers weren’t adequately trained.

“The cars are routinely in accidents resulting in damage,” Miller wrote. “This is usually the result of poor behavior of the operator or the AV technology. A car was damaged nearly every other day in February. We shouldn’t be hitting things every 15,000 miles. Repeated infractions for poor driving rarely results in termination. Several of the drivers appear to not have been properly vetted or trained.”

That’s nuts. Hundreds of self-driving cars were on the road at the time, in San Francisco, Pittsburgh, Santa Fe, and elsewhere. The AV technology was demonstrably faulty, the backup drivers weren’t staying alert, and despite repeated incidents—some clearly dangerous—nothing was being addressed. Five days after the date of Miller’s email, a Volvo using Uber’s self-driving software struck Elaine Herzberg while she was slowly crossing the street with her bicycle and killed her. The driver was apparently streaming The Voice on Hulu at the time of the accident.

This tragedy was not a freak malfunction of some cutting-edge technology—it was the entirely predictable byproduct of corporate malfeasance.

If Uber is the worst actor in this case, it is not the only bad one—and it was building on a culture established years before, where a need to be first to the technology eclipsed safety concerns.

Anthony Levandowski, the former lead of Google’s self-driving car project, was notoriously brash, careless, and egregiously reckless. It’s probably fair to say we don’t even know how many crashes the self-driving cars he oversaw were involved in. Here’s a key example, as reported by Charles Duhigg in the New Yorker:


“One day in 2011, a Google executive named Isaac Taylor learned that, while he was on paternity leave, Levandowski had modified the cars’ software so that he could take them on otherwise forbidden routes. A Google executive recalls witnessing Taylor and Levandowski shouting at each other. Levandowski told Taylor that the only way to show him why his approach was necessary was to take a ride together. The men, both still furious, jumped into a self-driving Prius and headed off.

“The car went onto a freeway, where it travelled past an on-ramp. According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic, but Google’s software wasn’t prepared for this scenario. The cars continued speeding down the freeway side by side. The Camry’s driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median. Levandowski, who was acting as the safety driver, swerved hard to avoid colliding with the Camry, causing Taylor to injure his spine so severely that he eventually required multiple surgeries.

“The Prius regained control and turned a corner on the freeway, leaving the Camry behind. Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.”

Levandowski’s penchant for putting AV testing before safety—which is now quite well documented—and the fact that the police were never informed are the key points here. For a long time, Google appeared to be candid about its autonomous vehicle program; so much so that Wired reported that a Google car had caused its “first crash” in 2016, five years after the Levandowski incident.

This, combined with the Uber revelations, should make us think deeply about what level of trust and transparency we should demand from the companies doing autonomous vehicle testing on our shared roadways.

In December 2016, however, Google spun its autonomous car division off into Waymo, which has reported no similarly serious crashes since. “Anthony Levandowski’s disregard for safety does not reflect the mission and values we have at Waymo where hundreds of engineers on our team work each day to bring this technology safely to our road,” a spokesperson told me in an email. “Our company was founded to improve road safety, and so we hold ourselves to a high safety standard.” According to California’s autonomous vehicle crash log, Waymo has been involved in those 21 minor accidents this year, and was only at fault in a couple. That is a pretty good track record, assuming all accidents were accurately reported, and Waymo’s emphasis on safety appears to point the way forward for the industry if it hopes to regain the public’s trust.

For its part, Uber took its autonomous vehicles off the roads and says it has overhauled its self-driving car testing procedures. But when I asked an Uber spokesperson how the company has changed its corporate policies since the accident, she sent me the exact same boilerplate response that was sent to the Information (and to my colleague Jennings Brown): “Right now the entire team is focused on safely and responsibly returning to the road in self-driving mode. We have every confidence in the work that the team is doing to get us there. Our team remains committed to implementing key safety improvements, and we intend to resume on-the-road self-driving testing only when these improvements have been implemented and we have received authorization from the Pennsylvania Department of Transportation.”

Uber refused to answer questions about whether changes had been made to company culture, whether anyone at the company had been held responsible, or anything, really, beyond that statement and publicly posted materials. Uber did not deny it was responsible for the crash.



In the first week of December 2018, police successfully stopped a Tesla Model S whose driver was asleep at the wheel while it was barreling down the road at 70 mph. He had Autopilot enabled and had traveled 17 miles before being pulled over. In the October lawsuit against Tesla mentioned earlier, Shawn Hudson, the Florida man whose Tesla crashed while on Autopilot, claimed he had been misled into believing the car could function autonomously. He said he bought the Tesla in part because of its Autopilot feature, which he thought would allow him to relax on his long commute, and so routinely wrote emails and checked his phone while on the road—including when the crash took place.

Tesla is in a unique position. As previously noted, it has long shipped its cars with what it describes as “full self-driving hardware” while also issuing advisories not to rely completely on Autopilot, the semi-autonomous driving system that costs an extra $5,000. It is continuously upgrading its software, edging drivers already on the road closer to a self-driving reality.

“Tesla,” Stilgoe tells me, “is turning a blind eye to their drivers’ own experiments with Autopilot. People are using Autopilot irresponsibly, and Tesla are overlooking it because they are gathering data. And Tesla is also misleading people by saying that they are selling self-driving cars. They use the phrase ‘full self-driving hardware.’”


Screenshot: Tesla.com

Indeed, Elon Musk himself essentially advertised Tesla’s self-driving potential with a no-hands bit of derring-do on nothing less than 60 Minutes. Tesla stresses that during the buying process, its sales team demonstrates the proper use of Autopilot, disabusing buyers of the notion that it functions anything like a self-driving car. It has added features that turn off Autopilot if users go hands-free for too long. Still, marketing and the power of suggestion—not to mention the long-held geek dream of riding in self-driving cars—are potent forces, and crashes and incidents in which Autopilot is misused as a self-driving system continue apace.

Overtly marketing its product this way is a conscious decision, and it likely helps instill in Tesla drivers the faith that they can use the feature in the manner the marketing language seems to describe. The problem is “what happens when the driver is over-reliant on the system,” as Thatcham Research, which investigated just how good the software really is in a segment for the BBC, explained. (Spoiler alert: there are a number of scenarios in which “full self-driving” Teslas crash into objects in front of them.) But that doesn’t stop the humans inside them from becoming lulled into a false sense of security.


Herein lies the second question with regard to Tesla’s responsibility to its drivers—its own engineers knew this was a distinct safety concern. (This is also why I think it’s a little unfair to heap all the blame on the “safety driver” who was supposed to be watching the road in the fatal Uber crash; for one thing, Uber apparently cut the safety drivers down from two to one—Miller called for reinstating two drivers—and for another, it is human nature to be lulled into a sense of security as evidence accumulates that we are safe after hours of non-events, and after, presumably, we get very, very bored.) In fact, one of Waymo’s recent crashes occurred after its lone safety driver fell asleep at the wheel and accidentally turned the system off.

In the report it released after the first fatal crash, the National Highway Traffic Safety Administration—which cleared Tesla of wrongdoing—nonetheless found that:

“[O]ver the course of researching and developing Autopilot, Tesla considered the possibility that drivers could misuse the system in a variety of ways, including those identified above – i.e., through mode confusion, distracted driving, and use of the system outside preferred environments and conditions. Included in the types of driver distraction that Tesla engineers considered are that a driver might fail to pay attention, fall asleep, or become incapactitated [sic] while using Autopilot. The potential for driver misuse was evaluated as part of Tesla’s design process and solutions were tested, validated, and incorporated into the wide release of the product. It appears that Tesla’s evaluation of driver misuse and its resulting actions addressed the unreasonable risk to safety that may be presented by such misuse.”

In other words, Tesla’s team knows that some drivers are going to use Autopilot as an actual self-driving mode or otherwise end up relying fully on the system. And while the NHTSA found that Tesla’s engineers had accounted for those elements, the system has still been in use during two fatal crashes and many more accidents. (Meanwhile, the National Transportation Safety Board found that some of the blame did in fact lie with Tesla, for not doing enough to dissuade drivers from abusing Autopilot).

After hours of phone calls with Tesla spokespeople, during which the company refused to speak on the record about the tension between promoting its vehicles with language like “full self-driving hardware on all cars” and then insisting drivers who buy Teslas not treat them as self-driving cars (and during which one of the spokespeople kindly told me it sounds like I need to think about my story a little more), the company offered only the statement it issued following the Florida lawsuit back in October. That statement is as follows: “Tesla has always been clear that Autopilot doesn’t make the car impervious to all accidents, and Tesla goes to great lengths to provide clear instructions about what Autopilot is and is not, including by offering driver instructions when owners test drive and take delivery of their car, before drivers enable Autopilot, and every single time they use Autopilot, as well as through the Owner’s Manual and Release Notes for software updates.”

Look, driving is already a messy, dangerous business. High-functioning self-driving cars could be a godsend in terms of human health gains, and testing them in traffic is always going to be a thorny matter both logistically and ethically. In Tesla’s case, we have no way of knowing whether some of the Autopilot users would have been driving so distractedly that they would have caused accidents regardless. But the fact is, those of us already on the road in our not-so-autonomous cars have little to no say over how we coexist with self-driving ones. Over whether or not we’re sharing the streets with AVs running on shoddy software or overseen by less-than-alert human drivers because executives don’t want to lag in their quest to vacuum up road data or miss a sales opportunity.


Thus far, tech giants in a “race that they need to win,” as Levandowski once put it, have made those decisions for us. Short of pushing for legislation that regulates the testing of autonomous vehicles more stringently, demanding that companies be held properly accountable for injurious and fatal transgressions seems a reasonable recourse. Unless these companies dramatically improve their efforts to prioritize safety and openness over speed of development and salesmanship, they risk both inflicting further harm and alienating a public already wary of the prospect of self-driving cars.

“We still have little idea what most people would consider to be an acceptable risk, but my guess would be that members of the public would not be satisfied even if we knew that self-driving cars were a bit safer than human drivers,” Stilgoe says. “A crash, any crash, is a catastrophe. The self-driving car industry has got to work a lot harder to win the argument on safety.”
