Why A Tesla Using FSD Running A Stop Sign Isn’t Necessarily Terrible


Illustration: Jason Torchinsky

Videos showing people using the latest version of Tesla’s still misleadingly named Full Self-Driving (FSD) Beta v10 have been showing up on the internet lately, and as seems to be the case with every release, there’s a pretty varied mix of impressively competent, even mundane drives, and alarmingly bad ones where the car seems confused and hesitant and disengages constantly.

Here, to be fair, I’ll give examples of each. Here’s one where things go pretty well, in San Francisco:

There’s a lot to be impressed by there; it’s not perfect, but it’s doing some complex and impressive things, even seemingly deciding to take some initiative on a right-turn-on-red, which is a pretty sophisticated move.

On the other side of the spectrum is this unedited video of a drive in excellent weather and visibility conditions, in a city with, it appears, less traffic than what was seen in the San Francisco video above, and yet FSD handled driving here with all of the aplomb, skill, and confidence of a ferret that’s just been handed an iPad and asked to look up something on Wikipedia.

It’s not good. Even if we average out the performance of FSD between these two videos, the end result is not something that is remotely close to anything like “full self driving,” no matter what its name claims.

It’s impressive in many ways, and a fascinating work in progress, but it’s not done, and here it is, deployed on public roads via 4,500 pounds of mobile computing hardware, surrounded by people who definitely did not opt into this testing.

Setting the ethics and repercussions of public testing aside for the moment, seeing the results of these tests is interesting. One example that caught my attention was in this tweeted video of a short nighttime FSD drive:

What I find interesting here happens about nine seconds into the video, when the Tesla approaches a stop sign, and proceeds to roll through it at about four mph.

While we’ve very likely all done this exact same maneuver many, many times, especially on similarly empty nighttime streets, it is technically illegal. That Tesla ran a stop sign. It’s a robot, not a people, and as such shouldn’t be susceptible to the base, reprobate urges that push us into minor crimes, like stop-sign-roll-through or sneakily and messily devouring the contents of a Pringles can while hiding your head in the milk fridge at a grocery store.

But it did commit that minor infraction, and while many are calling this out as a problem, I’m not so sure I think this sort of action from an automated driving system is such a bad thing, because the world is very complicated.

I don’t know if there’s any algorithm in Tesla’s FSD stack with some sort of IF-THEN conditional that takes into account IF nighttime AND IF no traffic AND IF stop sign THEN roll through at low speed, but if one did exist, I don’t think it would necessarily be a problem.
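
Just to make that concrete, here’s roughly the sort of thing I’m imagining, a crude sketch entirely of my own invention; the names and thresholds are made up, and none of this is pulled from Tesla’s actual software:

    # Purely hypothetical sketch -- not Tesla's actual FSD logic.
    # Assumes a perception layer has already worked out these booleans.
    ROLL_THROUGH_SPEED_MPH = 4

    def stop_sign_behavior(is_nighttime, cross_traffic_seen, pedestrians_seen):
        """Decide how to handle an upcoming stop sign."""
        if is_nighttime and not cross_traffic_seen and not pedestrians_seen:
            # The move we've all done: slow to a crawl, but don't fully stop.
            return ("roll_through", ROLL_THROUGH_SPEED_MPH)
        # Otherwise, obey the letter of the law and come to a complete stop.
        return ("full_stop", 0)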

I say this because human driving is complex and nuanced, and there are times when following the letter of the law is not the best choice for a given situation.

For example, there are the traffic laws that are written on the books, and then there are the traffic laws as they are actually practiced. I covered this a good bit in my book, so I’ll just excerpt that here instead of re-writing it:

Making things even more difficult is the fact that these unwritten rules are extremely regional, and every major metropolis seems to have its own dialect of driving and its own set of unwritten rules. In Los Angeles, for example, there is an extremely consistent and rigid unwritten rule about turning left at an unprotected traffic light. That is, a left turn at an intersection with no provision for a green arrow traffic signal.

The Los Angeles rule is that when the light goes from green to yellow to red, up to three cars waiting to turn may turn on the red light. I lived in Los Angeles for over 17 years, and this rule was one of the most consistent things in my life there. Every Los Angeleno seemed to know about the three-cars-on-a-red rule, and when I described it to anyone else in the country, they looked at me like I was an idiot. And not the usual kind of idiot I’m assumed to be; a dangerous idiot.

Should a robotic vehicle follow this unwritten LA traffic rule? It’s technically illegal, but in practice it’s the norm, and not acknowledging the rule could potentially create more issues than just going with it would. I know if I were in the middle of an intersection when the light went red and some stupid robo-car in front of me refused to make the turn, it’d drive me batshit. I don’t think I’m the only one.
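
For what it’s worth, encoding a rule like that wouldn’t even be the hard part. Here’s a crude, entirely hypothetical sketch, again my own invention and not anything any automaker actually ships, of what a table of regional “unwritten rules” might look like:

    # Entirely hypothetical: a lookup of regional "unwritten rules," the kind
    # of local driving dialect an automated car would somehow have to learn.
    UNWRITTEN_LEFT_TURN_RULES = {
        "los_angeles": 3,  # up to three waiting cars may finish the turn on red
        "default": 0,      # everywhere else: don't even think about it
    }

    def may_finish_left_turn_on_red(city, position_in_queue):
        """True if local convention lets this car complete its unprotected
        left turn after the light has gone red."""
        allowed = UNWRITTEN_LEFT_TURN_RULES.get(city, UNWRITTEN_LEFT_TURN_RULES["default"])
        return position_in_queue <= allowed

The table is the trivial part; the hard part is knowing that this particular dialect of driving exists at all, and deciding whether a robot should be taught to speak it.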

Ignoring the three-cars-on-a-red rule in LA would make human drivers hate automated cars, and would cause more traffic problems. Same goes for cars in big cities with lots of pedestrian traffic, like New York or Mexico City, for example, where drivers often have to edge slowly into busy crosswalks just to be able to demonstrate an intent to actually move; a totally stationary car will be stuck there forever, as the masses of traffic-jaded pedestrians will just keep walking past.

Pushing into the crosswalk while there are people walking there is technically not exactly legal, and yet it’s a crucial part of the dance that keeps traffic flowing.

Once you start thinking about this, there are so many examples: crossing a double yellow to give room to a broken-down car or a cyclist on a narrow road, avoiding an obstacle by driving on the shoulder or into a bike or bus lane, speeding up on a yellow instead of slowing to avoid a hard braking situation at a stoplight, and so on.

None of those are necessarily ideal and all are technically illegal to some degree, but the results those actions provide are better than the outcomes of attempting to follow the law to the letter.

Really, the ability to understand when rule-breaking makes sense is a good example of the top-down vs. bottom-up reasoning divide that makes the problem of self-driving so difficult.

Absurdly simplified, this concept describes the difference between how humans drive (top-down, meaning we start with an overall understanding of the entire context of the environment we drive in) and how a computer drives (bottom-up, reacting to sensor inputs and rules without really understanding the overall situation).

This is one of the hardest obstacles for self-driving car tech to overcome. It’s not just about sensors and neural networks and powerful computers—we have to figure out ways to synthesize our cultural knowledge surrounding driving, which is a big deal.

The good news is that I think there are companies actively thinking about this. I recently met with some people from Argo AI, the ones using a disguised pre-production Volkswagen ID Buzz for their testing. I’ll have a bigger article on them soon, but for the moment, here’s a teaser pic:


Photo: Jason Torchinsky

Their approach to automated driving is quite different from Tesla’s, which, again, I’ll get into soon, but the key thing that came up in our conversation, the thing that encouraged me that these harder-to-define issues are at least being considered, was one word: Halloween.

The Argo engineers understood that there are times when, for reasons that have absolutely nothing to do with driving, all the rules change. During Halloween, kids won’t necessarily look like kids, and they’ll be moving all over the roads, at night, in patterns and paths that do not happen any other time of year.

Whatever an AI thinks it understands about pedestrian behavior does not apply during the candy-fueled madness of Halloween. The engineer I spoke with understood this and treated it as a valid problem that needs some sort of solution.

Would they special-case October 31st and have the car operate under an entirely different set of rules? Would the speed limit be much more severe on this night? Would the lidar or camera setups operate differently?

I don’t know, but I do know that Halloween is just one of many, many bits of glorious chaos that makes human life so wonderful, and such a hell for machines to understand.

But it’s our job to make these machines understand, even if that means, sometimes, breaking some of the normal rules of the road.

None of this is easy, and it’s good to remember that.
