Everyone Needs To Stop Assuming Autonomous Vehicles Are Going To Be Safer Than Humans


Articles about how autonomous vehicles are almost here and what a big deal they’ll be aren’t exactly uncommon, but this one from a couple of days ago over at Interesting Engineering struck me in particular because it rehashes an idea that’s at the root of so many articles and self-righteous online arguments from Tesla owners: that autonomous vehicles will unquestionably be safer than human drivers. There’s a logical fallacy at work there, and there’s no reason we have to keep perpetuating it.

The Interesting Engineering article is called Will Our Children Ever Learn To Drive and is, in turn, inspired by a Motor Trend story about a robotics expert who predicts that kids born today won’t ever drive a car, as our “autonomous future is only 10, 15 years out.”

Now, there’s plenty to unpack and discuss right there, as I personally think those timelines are wildly optimistic and don’t factor in any number of realities of driving and the world that could affect whether or not your kid chooses to learn to drive, but I want to focus more on one particular part of the IE article.

This part:

Would having no human drivers be better?

Our children may then have a choice to drive a car, but they will probably never have to drive a car if they don’t want to. As a parent myself, I personally hope my children never have to drive a car for one main reason – safety. The crash rate for young drivers (16-19) is 2.7 times higher than for any other age group, according to the California DMV. In general, people are not great at driving, but teens are especially not great at driving. 94% of all motor vehicle accidents are because of human error. That means that 94% of all accidents could hypothetically be stopped with the implementation of autonomous vehicles (ideally).

Having a computer drive your car will be much safer than driving a car yourself. That’s not an opinion, but a fact.

That last line there sums up the issue: the idea that a computer driving a car is always safer and always will be safer than a human driving. The author here even boldly states that “that’s not an opinion, but a fact,” even though it is in no possible way a fact. I’m not even sure it counts as an opinion, because we simply do not have enough evidence to prove it one way or the other.

Let’s look at the paragraph above that boldly inane—but incredibly common—statement. I’m absolutely willing to accept that 94% of accidents are caused by human error, but the leap from that idea to

“That means that 94% of all accidents could hypothetically be stopped with the implementation of autonomous vehicles (ideally).”

…is a huge problem, and is tainting the whole popular perception of autonomous vehicles.

Even with those “ideally” and “hypothetically” qualifiers, this statement is a complete load of shit. It’s only true if we assume that the computers driving autonomous vehicles will make no mistakes at all, ever, and there is no way that will be the case.
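To see why the leap doesn’t follow, here’s a minimal back-of-the-envelope sketch in Python, using entirely made-up numbers of my own (none of this is real crash data): eliminating human error only wins if the machines’ own error rate comes in lower, which is precisely the thing nobody has proven yet.

```python
# Back-of-the-envelope sketch with HYPOTHETICAL numbers (nothing here
# is real crash data). Eliminating the 94% of crashes attributed to
# human error only helps if the machines' own error rate is lower.

human_crash_rate = 1.00                   # crashes per million miles (made up)
human_error_share = 0.94                  # the oft-cited figure

# Crashes that remain even with perfect drivers (mechanical failures, etc.)
unavoidable = human_crash_rate * (1 - human_error_share)   # 0.06

# An AV fleet swaps human error for machine error: its total crash
# rate is the unavoidable baseline plus whatever mistakes it makes.
for av_error_rate in (0.10, 0.94, 2.00):  # hypothetical machine-error rates
    av_crash_rate = unavoidable + av_error_rate
    verdict = "safer" if av_crash_rate < human_crash_rate else "NOT safer"
    print(f"machine error {av_error_rate:.2f} -> total {av_crash_rate:.2f} ({verdict})")
```

The 94% figure tells you nothing about which row of that loop we’ll actually end up living in.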

This idea, though, is old and pervasive. The “computers never make mistakes” conceit is a strangely powerful holdover from the early days of computing, when computers were remote, massive gods kept alive by white-coated acolytes in specially air-conditioned rooms, acolytes who jealously and stingily doled out computer time and solemnly gifted you with stacks of punched cards containing the Holy Calculated Results.

The fundamental belief is still around, even though you’d think, with the near-constant access we all have to computing devices, we’d have realized that computer systems can be as flawed and inefficient as any human.

Let’s face it: computers, for all their speed and power and capabilities, are fundamentally idiots. Don’t forget that we fool them on a daily basis with squiggly letters and pictures of stoplights or crosswalks (when you solve one of those CAPTCHAs, by the way, you’re helping to train future AVs).

Even the term “artificial intelligence” is misleading, because what those computers are doing is in no way like what we understand as “intelligence.” AI is maybe “simulated intelligence” at best, because what it’s really doing is using all kinds of algorithmic and brute-force methods to achieve results similar to what we’d expect out of a being with actual intelligence, even though the actual processes are absolutely nothing alike.

What I’m getting at here is that the fundamental assumption that computer-driven cars will be safer than human drivers is by no means a safe one to make.

Sure, there are plenty of issues an AV won’t have to deal with that humans do: being distracted by your phone or having to pee, road rage or uncontrolled horniness or fatigue or hunger or any number of other biological urges and issues that can afflict any of us.

But AVs will have their own issues, because they’re stupid machines that can only react and branch off decision trees based on how they’re programmed, and the chaos and uncertainty of the real world can throw any number of baffling situations at them that a human wouldn’t even worry about for a moment.

Dirty sensors, odd reflected light, confusing billboards, clouds of smoke or dust, unpredictable animal or human behavior, a bunch of paper blowing in the wind: any of these things can completely lock up even the most advanced AV humans have built.
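To make that failure mode concrete, here’s a deliberately dumb, entirely hypothetical sketch (no resemblance to any real AV stack) of the kind of rule-based branching I mean, and what happens when the world hands it something its programmers never enumerated:

```python
# A deliberately dumb, HYPOTHETICAL sketch of rule-based driving logic.
# Real AV stacks are far more sophisticated, but the failure mode is
# the same: inputs outside the enumerated cases get a canned response.

def react(obstacle: str) -> str:
    if obstacle == "pedestrian":
        return "brake"
    if obstacle == "cyclist":
        return "slow and give room"
    if obstacle == "stopped car":
        return "change lanes"
    # Paper blowing across the road? Reflected glare? A dust cloud?
    # None of these match a branch, so the machine just... stops.
    return "freeze and phone home"

print(react("pedestrian"))             # brake
print(react("paper blowing in wind"))  # freeze and phone home
```

A human glances at the blowing paper, thinks nothing of it, and drives on.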

Sure, hypothetically, and assuming an entire cooperative infrastructure of both physical materials and wireless data transmission, AVs could likely end up proving to be, on the whole, safer than human drivers.

But we’re a long, long way from that point, and even then I suspect some actual human brains may need to be sprinkled into the mix of technologies somewhere, to give some good old biological common sense to machine conundrums.

I’m no Luddite, but I’ve also spent far too much time around computers in my life to just blindly bow down to their superiority. The more complex computer systems become, the more points of failure get introduced, and the less predictable those failures become.

I do believe that we will eventually develop nearly fully autonomous vehicles that can perform almost all the driving tasks we need—but I don’t for a minute think they will be universally infallible, nor do I think they will entirely eclipse the need for human-driven vehicles. And I don’t think that’s a problem.

The problem is the nearly religious faith so many AV pundits seem to have in the abilities of the computers that will drive their cars. Have these people never actually used computers before? Tried to get something to print? Sometimes everything works great, and sometimes it’s a shitshow. Just like people.

At this point in the game, we need to look at replacing human drivers with computers as swapping out one set of problems for another, different set. It may prove that the problems AVs have can be engineered out, or that we, as humans, are lazy enough or bored enough with driving that we choose to focus on fixing the machine problems instead of the human ones, and that’s fine.

Every technological advance has its own compromises, and if we choose to develop AVs that can fuck up in their own way as opposed to how human drivers fuck up, then, great, have at it.

But let’s knock it off with the still completely unproven notion that an all-AV future will be inherently safer. Maybe it will.

But, for right now, I’ll happily go all John Henry and put myself head-to-head against any AV that’s out now or will come out this year. Autonomous vehicle builders and fetishists, you know where to find me.

(Am I going to plug my book about this stuff now? I am.)
