Here’s One Way To Fix This Whole Semi-Autonomous Driving Thing


Recently, I was driving along Interstate 90 in central New York in a Nissan Altima equipped with the company’s driver assist technology, ProPilot. This section of I-90 was your typical long and straight rural interstate. The lanes were well-marked, the road was smoothly paved, traffic was light, and skies were clear. In other words, it was a good day to let the computer drive. Or help me drive, as it were.

I was about 45 minutes into a three-and-a-half-hour drive. At this point, I had logged about 500 miles in the Altima over a few days and already determined I didn’t much care for ProPilot’s steering assist feature, which basically keeps the car in the lane on the highway as long as you keep your hands on the wheel. I didn’t care for it not because it felt unsafe, but because the wheel could not consistently detect my hands, a necessary condition for the steering assist to activate.

Plus, performing an act amounting to holding the car’s hand felt awkward and, frankly, silly to me. I much preferred to steer myself, which not only felt more comfortable but also kept me mentally engaged with what was happening on the road.

At this point in the drive, the podcast I was listening to had just ended. I needed to select a new episode, which required some Android Auto navigation. I thought about my options: futzing around with the center console while steering, as drivers have done since the advent of radio (which is, let’s face it, not especially safe; drivers tend to inadvertently apply torque to the wheel when leaning to tap the console); pulling over (the responsible option nobody actually does); or engaging ProPilot steering assist (and violating the manufacturer’s rules about maintaining consistent contact with the wheel and monitoring the road).

I found myself puzzled by this scenario. Here I was, driving down a stretch of highway this car can absolutely handle on its own, and yet there was no way to let the robot fully take the wheel, even just for a brief period, though that would almost surely have been the safest and most practical choice. It made me wonder if we’re approaching this period of “semi-autonomous” driving all wrong.

Over the past few weeks, I’ve used three of these so-called “semi-autonomous” systems: Nissan’s ProPilot, Kia’s DriveWise, and Cadillac’s SuperCruise, logging over 1,500 combined miles. (Full disclosure: I was able to test these systems because Kia, Nissan, and Cadillac lent us a Niro EV, an Altima, and a CT5, each with a full tank of gas, or in the Niro’s case, a full charge.)

While I have my nitpicks with each system—mainly about how they handle the driver monitoring part—my biggest overarching issue was how they’re designed to be used. I wished these systems were designed not to share the driving task for long periods of time, but to take it over completely for very short periods when the system has high confidence it is interpreting its environment properly, like in the I-90 scenario I described above.

This would not only be more useful to drivers overall, but mitigate the worst parts of these systems: the clumsy and often ineffective way they make sure the driver is paying attention to the road. It would also be a better way to introduce drivers to the concept of self-driving cars, something we’re told is constantly right around the corner, a proclamation out of step with the fact that car companies appear to not yet trust their own systems to drive in even the simplest circumstances.

We’re living in a weird time for autonomous cars. On the one hand, cars can most definitely not drive themselves. Even so-called “self-driving cars,” themselves largely development prototypes at this point, still need a human to sit there just in case the car screws up.

At the same time, an increasing number of cars can, in practice, drive themselves pretty dang well on highways, particularly in highly predictable environments when there’s no traffic or construction. Tesla is notorious for doing its absolute best to blur the lines here, constantly misleading the public about what its cars can actually do, resulting in Autopilot’s rampant misuse.

Tesla’s approach contrasts sharply with every other automaker’s risk-averse approach, but that doesn’t change the fact that Autopilot, like most big car companies’ semi-autonomous systems, functions just fine on divided highways in normal traffic with clear lane markings.

But the overly cautious minding-the-computer approach to semi-AV driving results in a worse outcome for everyone. Car companies are selling a technology that is, frankly, annoying to use with no clear benefit to the driver. (Cadillac’s SuperCruise is the best of the bunch because its driver monitoring relies on a dash-mounted camera that makes sure the driver is watching the road, rather than on contact with the steering wheel; but even that requires the driver to look at the road at all times, which only induces severe boredom.) It may offer a minor safety benefit, but car companies can’t say that because they don’t have any good data to back it up. Meanwhile, the human-monitoring-the-computer-driving-the-car dynamic invites all sorts of potential safety concerns.

The result, for now, is that drivers and cars essentially share responsibility over the act of driving. This, for me, is the worst of both worlds. The computer is kind of driving, but not really, and I’m still responsible for what the car is doing, but not fully in control. And no one really gets to enjoy the benefits of a computer being able to drive the car.

I of course understand why car companies do this: liability. If automakers gave drivers permission to fiddle with the radio or text for even a minute, and a deer dashed in front of the car or some other unforeseeable incident occurred, that’s a lawsuit. There is no reason, legally speaking, for car companies to accept that risk when they can keep doing what they’ve been doing for more than a century: blaming drivers for any and all crashes except in cases of manufacturing defects.

I certainly didn’t come away from using these semi-automated systems thinking they’re ready to drive humans around all the time, but I did think we’re getting the implementation of their limited use case wrong. Instead of kind-of-but-not-really driving the car most of the time, they should be completely driving the car only a tiny fraction of the time. Say, 15 seconds.

Yes, this would require the systems to be totally foolproof for that short window, with the automaker liable for whatever occurs during it. If car companies don’t believe their technology is capable of that, then it shouldn’t be driving.

In my mind, it works something like this: each system has an algorithm to evaluate its confidence in its ability to interpret the environment. In situations like the I-90 case above, that confidence will be very high and the semi-AV system can engage for a minute or two at a time (or pass control back to the driver if it detects a sudden drop in confidence, as these systems do now). This gives drivers the ability to do all the little tasks we do anyway during the safest parts of highway driving, like choosing new music, eating, drinking, getting something for the kid, or (yes, unfortunately) texting, in a much safer way. After the timer expires, perhaps the driver would have to wait another minute or two before re-engaging the system.
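To make this concrete, here’s a minimal sketch, in Python, of the kind of confidence-gated state machine I’m imagining. Everything in it is my own assumption: the mode names, confidence thresholds, and timer values are illustrative placeholders, not any automaker’s actual logic.

```python
# Hypothetical sketch of a confidence-gated, short-takeover controller.
# All thresholds and timings below are illustrative assumptions.

import time
from enum import Enum, auto


class Mode(Enum):
    MANUAL = auto()       # driver is steering
    AUTONOMOUS = auto()   # system has full, time-limited control
    COOLDOWN = auto()     # takeover just ended; re-engagement blocked


ENGAGE_THRESHOLD = 0.98   # assumed: minimum confidence to take over at all
ABORT_THRESHOLD = 0.90    # assumed: confidence floor during a takeover
TAKEOVER_SECONDS = 90     # "a minute or two at a time"
COOLDOWN_SECONDS = 90     # wait before the driver can re-engage


class ShortTakeoverController:
    def __init__(self) -> None:
        self.mode = Mode.MANUAL
        self.deadline = 0.0

    def request_takeover(self, confidence: float) -> bool:
        """Driver asks the car to take over (say, to pick a new podcast)."""
        if self.mode is Mode.MANUAL and confidence >= ENGAGE_THRESHOLD:
            self.mode = Mode.AUTONOMOUS
            self.deadline = time.monotonic() + TAKEOVER_SECONDS
            return True
        # Refuse: confidence too low (cones, faded lanes) or in cooldown.
        return False

    def tick(self, confidence: float) -> Mode:
        """Called every control cycle with the perception stack's confidence."""
        now = time.monotonic()
        if self.mode is Mode.AUTONOMOUS:
            # Hand control back on a sudden confidence drop or timer expiry,
            # then enforce the cooldown before the next takeover.
            if confidence < ABORT_THRESHOLD or now >= self.deadline:
                self.mode = Mode.COOLDOWN
                self.deadline = now + COOLDOWN_SECONDS
        elif self.mode is Mode.COOLDOWN and now >= self.deadline:
            self.mode = Mode.MANUAL
        return self.mode
```

The key design choice is that the system either drives entirely or not at all: it refuses to engage below a confidence threshold, hands control back on a sudden confidence drop, and enforces a cooldown so drivers can’t chain takeovers into de facto full-time autonomy.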

In another example I encountered, construction crews had laid down cones for about 10 miles of road paving, shifting the lanes about four feet. ProPilot understandably couldn’t interpret the environment and determine where the lanes were (this was, however, extremely annoying to manage with the “lane keeping” assist feature). In these scenarios, the system simply would not engage, much as it refuses to engage in certain unsafe circumstances now, preventing driver abuse.

Shifting to the short-term takeover approach would not only be better for drivers (wouldn’t it be nice to eat a candy bar, open a soda bottle, or pick a new song without driving with your knees?), but it would also force automakers to rigorously evaluate whether their autonomous technology is ready for public use, rather than enlisting all of us as their safety drivers and data gatherers. It would spur regulators to come up with actual rules around its deployment and use, something they have largely passed the buck on to date.

In other words, it could be a more logical middle ground between our manual past and our (widely predicted) self-driving future. Because as things stand now, these fancy new technologies, for all their accomplishments, are being wasted.
