The problems with self-driving cars

Self-driving cars, like the AI technology they are based on, always seem tantalizingly just beyond our reach. I have been hoping that they become a reality because I am getting on in years, and there is bound to come a time when it will not be safe for me to operate a vehicle. This is despite the fact that I have been accident- and ticket-free for my entire driving career, except for one fender-bender and one minor infraction, both of which took place over three decades ago when I was young, wild, and foolish. (No, not really. Both were rather boring events.)

The loss of driving privileges can result in a deep drop in a person’s independence, especially in the US, which has pretty bad public transportation. Having a self-driving car would keep older people, or those with any issue that prevents them from driving, from being housebound. Of course, these cars are initially likely to be expensive, but over time the prices should come down. The catch is that even though these cars have improved tremendously, they still seem to be not ready for prime time.

Elon Musk, the CEO of Tesla, has become, as a result of his relentless self-promotion, the face of self-driving cars. Interestingly, in the current race for a US Senate seat in California, there is one wealthy candidate, Dan O’Dowd, who is self-funding his campaign and whose entire platform is the argument that the self-driving AI technology that Tesla uses is terrible and dangerous to the public.

O’Dowd, a software entrepreneur with a 40-year history of working on military, aerospace and other commercial contracts, is running out of frustration at his fellow tech entrepreneur, Elon Musk, whom he accuses of endangering road safety with a driver-assistance software package he’s put in his Tesla electric cars.

Here is a campaign ad that he has put out showing the failures of the Tesla self-driving system:

O’Dowd’s career has been in creating technology that, he claims, is designed to never fail and cannot be hacked, so he is knowledgeable about the issue. He thinks that the government should be much more demanding of this technology.

But O’Dowd is also unapologetic about being a single-issue candidate. His mission, he says, is to ensure that government regulators become much tougher with the “move fast and break things” ethos that has inspired Musk and many other tech pioneers over the past two decades. He’s spent $650,000 on advertising so far and seems poised to spend a lot more over the next six weeks.

And, in his mind, it’s not just about Musk. O’Dowd believes that the problems he’s documented with Tesla’s “full self-driving” software package – problems that, according to publicly available video footage, have caused vehicles to veer unexpectedly into the wrong lane, turn the wrong way, crash into poles and endanger other road users – are emblematic of a broader and increasingly serious problem.

O’Dowd insisted that his campaign had nothing to do with commercial self-interest. Rather, he said, he found Tesla’s “full self-driving” software package more alarming than anything else in commercial use because it was, in his words, “amazingly terrible” – a car guidance system that, according to his analysis, goes wrong every eight minutes, whereas similarly experimental guidance systems run by competitors including the Google subsidiary Waymo typically go tens of thousands of miles before encountering problems. O’Dowd was similarly dismissive of the notion, promoted by Tesla, that such problems can be fixed by patching the software with online upgrades.

His analysis is far from a consensus position in the industry. Many experts say that everyone is struggling to crack the problem of producing a reliable self-driving car and that the problem of cybersecurity – making sure a bad actor cannot gain control of a fleet of tens of thousands of cars through their operating software – is a particularly vexing one across the board.

Some have argued that picking on Tesla is a little unfair since all the companies are struggling with similar issues.

It seems that until the technology improves considerably, fully self-driving cars will not be an option for ordinary consumers. What might happen in the interim is some kind of halfway measure where the cars mostly drive on their own but must have a human at the wheel to override the system if needed. This may require a new category of driver’s license. At present, licenses note whether you have special requirements for operating a vehicle, like wearing glasses. Perhaps there could be a category of license that allows people who are not able to fully operate a vehicle to ‘drive’ only self-driving cars that have such a manual override.

There are not many restrictions on driving, at least in the US. Once you get your license, getting it renewed is very easy, with no further tests required other than a vision test. I learned to drive using standard-transmission cars (those were the only ones in Sri Lanka at the time) that had three pedals: clutch, brake, and accelerator. The left leg operated the clutch and the right leg was for the brake and accelerator. As a result of my polio, my left leg was weaker than my right, but that was not a problem when it came to operating the clutch. Since coming to the US, I have driven only automatic-transmission cars, so the left leg has become unnecessary for driving, which is a good thing because with age that leg has become weaker. I would never drive a standard-transmission car now, since I do not think it would be safe. But my driver’s license has no restriction saying that I should drive only automatic-transmission cars.

Having restrictions that take into account a driver’s limitations seems reasonable. Having similar restrictions that take into account a car’s limitations seems equally reasonable.


  1. Reginald Selkirk says

    Some have argued that picking on Tesla is a little unfair since all the companies are struggling with similar issues.

    As noted in the quoted section immediately above, Tesla’s self-driving software performs measurably worse. There are reasons for this. Tesla’s philosophy seems to have been, “what can we do with what we have? We have an electric car with automatic controls, a computer, and some cameras.” Other companies seem to have approached it differently, more like “what would it take to accomplish this task?” For example, Tesla stands out for not using LIDAR, nor does it have any substitute for it such as stereo cameras. Thus they have the added burden of trying to assess the distance of viewed objects from monocular video.

  2. Deepak Shetty says

    I am a terrible driver (but still just one ticket and no accidents) and I was waiting for the Star Trek teleporters. But I suppose a self-driving car would do too. By all accounts Tesla’s autonomous driving is poorer than its rivals’, and Musk’s mouth ensures that they will get more criticism.

  3. Jazzlet says

    I don’t doubt that the Tesla system isn’t good, but this annoys me:

    – a car guidance system that, according to his analysis, goes wrong every eight minutes, whereas similarly experimental guidance systems run by competitors including the Google subsidiary Waymo typically go tens of thousands of miles before encountering problems.

    because I cannot judge for myself, as the writer doesn’t use the same units.

  4. mnb0 says

    “the US which has pretty bad public transportation services”
    Another reason for me to avoid that country. I never had a driver’s license.

  5. seachange says

    I am anti-car.

    If you read Marcus, you know he has a deeply informed sour opinion about people who claim that they make software that does not fail and can’t be hacked. My priors on this guy O’Dowd are that he is wrong.

    Right now the number of deaths by car is as great as the number of deaths by gun. I’ve been seeing stats quoted in recent articles saying the numbers are nearly the same, but the last time I looked, ten years ago, the number of people actually murdered by car was the same as those murdered by gun, with twice as many deaths by accident on top of that. Did deaths by accident drop to zero, or murders by car, or some combination of both that adds up to that much? I don’t believe it.

    The number of “accidents” and injuries from this is much greater. As for me, who does not drive a car and hasn’t needed one for a long time now, I do not buy that cars are somehow necessary, or more necessary than guns.

    Here in Los Angeles County you could, if you wanted, add up the cost of all the cars, plus the cost of maintenance, insurance, and fuel, and then multiply this by the average number of cars owned per resident. The total sum would put a bus on every corner every five minutes. There are already local and state sales-tax increases that allow many cities here in Southern California to run their own shuttles in addition to the county-wide bus lines. If you are a little patient you can take the West Hollywood Cityline to within two blocks of everywhere in the city.

    I also don’t buy that the deaths and injuries “by accident” are in any way acceptable, no matter how deeply people think that they need need need their car. If it is preventable by making people take better care of their cars, or prove more than once that they are safe drivers as in the UK, then it should be done, just like guns should be *well-regulated*. Just because it isn’t done doesn’t mean it shouldn’t be, and there is no high tech required.

    Eighty percent of drivers think they are better than average at driving. This means more than thirty percent of drivers are wrong, because there might be some average or above average drivers who get run into by the overly optimistic risk-taking dumbfucks. A Tesla would have to be very bad to be worse than humans. Could they be better? Maybe?

    Now there’s nothing pretty about Musk. There’s nothing pretty about the rest of the carbon-sucking automobile industry either. But he did change electric cars from golf-cart-looking things that only hippie-dippy dudes and dudettes bought into something that everyone wants to buy, something an electrical engineer like Rick’s dad would own and drive all across this country. He was doing automated driving cars when nobody else was. The major manufacturers wouldn’t even be trying if this evil nasty person weren’t kicking them in their butts just by existing.

    O’Dowd isn’t gonna stop those who would buy a Tesla from doing this.

  6. Suren says

    Jazzlet: Since driving speeds on most highways today are well below 200 mph, it should be a simple arithmetic problem to compare making errors every eight minutes and being error-free for tens of thousands of miles … unless you do most of your driving in Bonneville, UT.
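    Suren’s point can be made concrete with a couple of lines of arithmetic. A minimal sketch, assuming an average speed of 40 mph and taking 30,000 miles for “tens of thousands” — both numbers are assumptions chosen only to show the orders of magnitude, since O’Dowd’s claim specifies neither:

    ```python
    # Convert "one error every eight minutes" into miles between errors,
    # under an assumed average driving speed (the original figure gives no speed).
    assumed_speed_mph = 40          # assumption: mixed city/highway driving
    minutes_per_error = 8

    miles_per_error_tesla = assumed_speed_mph * minutes_per_error / 60  # ~5.3 miles
    miles_per_error_waymo = 30_000  # assumption: "tens of thousands of miles"

    ratio = miles_per_error_waymo / miles_per_error_tesla
    print(f"Tesla: ~{miles_per_error_tesla:.1f} miles per error")
    print(f"Waymo: ~{miles_per_error_waymo:,} miles per error (~{ratio:,.0f}x better)")
    ```

    Whatever speed you plug in, the two figures end up thousands of times apart, which is Suren’s point: the mismatched units don’t obscure the comparison.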

  7. sonofrojblake says

    @seachange, 5:

    If you read Marcus, you know he has a deeply informed sour opinion about people who claim that they make software that does not fail and can’t be hacked

    His opinions are worth listening to, but they’re not infallible. I’ve worked in a number of industrial installations run by software that would meet that description to a reasonable standard. Yes, a sufficiently motivated individual could in principle “hack” the software running those plants, but the effort would greatly outweigh the benefit, were it even possible. It’s possible in principle to make any system really, really hard to hack simply by airgapping it. The reason most of our tech these days is a hacker’s playground is that every fucking thing you buy nowadays absolutely has to phone home every five fucking seconds to tell the manufacturers what colour underpants you’re wearing. Meanwhile there’s a chip with software on it monitoring the engine in my car that hasn’t heard from Honda since they fitted it a decade ago. Nobody has hacked it, nor are they likely to.

    The total sum would get a bus on every corner every five minutes

    The only problem with that is this: you, or one of the other commenters here, might be on that bus. For that reason, I’m going to stick to my car and my own company, thanks. This will be a common attitude.

    If it is preventable by making people […] prove more than once that they are safe drivers like in the UK

    Eh? My mother, resident in the UK her whole life, proved she was a safe driver in 1967, the first and only time she was required to do so.

    A Tesla would have to be very bad to be worse than humans

    Not really. Consider: most drivers -- even BAD drivers -- can drive tens of thousands of miles over the course of several years without being involved in any kind of incident. By all accounts, a Tesla couldn’t be left in autopilot for even a couple of hours without needing sudden and vital support from a human.

    What’s actually true is that a Tesla (or any other SDC) would have to be VERY GOOD to be better than even bad humans. And none of them are yet.

    Now there’s nothing pretty about Musk. […] The major manufacturers wouldn’t even be trying if [it weren’t for Musk]

    Slight self contradiction, despite both things being true.

    I still think that by the time my kids (three and one and a half -- odd names, I know, but we like them) need to learn to drive, there’s unlikely to be any point because SDCs will be mature and available. I still think that by the time any grandchildren I may have need to learn to drive, it will be illegal.

    Consider: sooner or later, a SDC will emerge that is just as good as a bad driver -- i.e. it has a close call about once a month, and an actual accident perhaps once every year or three. Except: a human driver that bad never gets any better. An AI driver will only get better from there. Sooner or later, an SDC will emerge that is just as good as a good driver, i.e. it has a close call once a decade, and might never be involved in a proper crash in a lifetime.

    At some point, half the cars on the road will be SDCs, but ALL the accidents will be happening due to human-crewed vehicles, and this will be provable, because all the SDCs will be recording video all the time. And when that happens, it will be impossible to argue that humans have a “right” to drive, when a technology exists to give them all the freedom of driving, with none of the risk to themselves and others. And driving your own car will become illegal. I can’t wait, personally. It’ll be great. Apart from anything else, it will usher in a golden age of motorcycling -- no longer will riders lying in the road hear the common phrase “sorry mate, I didn’t see you”. The transition’s going to be tricky, though. There’s a certain kind of person out there who will troll SDCs by simply walking out in front of them, forcing them to stop. Trolley problems abound in the writing of the software -- I envisage a black market in hacks to the decision making algorithms. IF pedestrian detected THEN brake… UNLESS Confederate flag baseball cap/tshirt detected… you can imagine your own version.

    @Jazzlet, 3:
    If you can’t infer from the Tesla going wrong every eight minutes and the Waymo going wrong only after tens of thousands of miles that they’re orders of magnitude apart in reliability, then you EITHER think Waymo cars are very, VERY fast, or you’re just being pedantic for the sake of it. I approve of both, btw.

  8. lanir says

    Self-driving cars at this point require more of their drivers than actually driving the car themselves would, in the same way that very new drivers need an exceptional and attentive driver’s-education instructor riding along to catch and counter their mistakes before irrevocable things like collisions happen.

    I’m going to be very blunt here. There are two things that need to happen before anyone should even consider letting a car drive itself. First, the roads will have to change to accommodate self-driving cars. This will begin with interstates. Until this happens, self-driving cars should be considered about as dangerous as turning on cruise control on a long, straight segment of the interstate and taking your hands off the wheel: you’ll have to grab the wheel again frequently to keep from drifting off the road. The second thing that needs to happen is that the car will require two separate networks. If I plug in a USB drive with entertainment on it (video or music or whatever), I should have zero worries that it will infect the car and allow a hacker to control my driving. As I understand it, this is not the case now for any vehicle.

    I may be missing something else but for now those are the bare minimum items I would need to see before I consider this any better than “giving the wheel to Jesus” or some similar willful idiocy.

  9. Reginald Selkirk says

    What might happen in the interim is some kind of half-way measure where the cars will mostly drive on their own but must have a human at the wheel to override the system if needed.

    This could use a bit more discussion. It’s not as if the car will drive you 90% of the way to your destination, then pull over and tell you “your turn to get us the final bit home.” Rather, situations may arise when the person behind the wheel needs to take control IMMEDIATELY to correct a situation. This means the person cannot relax; they need to be paying full attention even if they are not actuating the controls. This is psychologically difficult for humans, and it also brings up another reason why Tesla is worse than its competitors. Recently, Tesla released a beta version of their software that allowed the person behind the wheel to play games on the console while the car was moving. This is an obvious dunderhead move, and is the sort of thing that justifies the criticism they receive.

  10. khms says

    There are two points I’d like to make here (and neither is about Musk’s personality).

    First, note that the FSD software Tesla is currently working on is a complete rewrite of the Autopilot software, since Tesla recognized a while ago that Autopilot was a dead end. It couldn’t ever be more than a simple driver assistance system. And from all I hear, while FSD still has a ways to go, it is a pretty dramatic improvement, and it visibly improves with almost every new version.

    And second, I really doubt O’Dowd’s numbers. I’ve seen a number of rebuttals, which consistently arrive at very different numbers. In any case, the theory of self-driving car software that cannot ever make mistakes is clearly bad fantasy. It also seems, from what I recall, that this guy has an ulterior motive stemming from Tesla’s decision to no longer use his software for development. Mind you, his software has no direct connection to self-driving cars, or even AI. As far as I can make out, it is mainly used to find bugs in C programs and the like. It seems rather doubtful that that kind of software can tell us anything about the quality of an AI system, or that its authors are equipped to judge one. That’s a very different kind of software.

  11. lorn says

    Unfortunately human supervision is not an answer. Particularly as the error rate decreases. Yes, decreases.

    Picture it this way: Imagine a self driving car that can do nothing right. Assuming you are willing to waste your time supervising, as opposed to just driving the car yourself, you will watch the AI system like a hawk. It is going to be an exhausting, but safe, drive.

    Imagine another self-driving car. This one is great: one serious mistake every thousand miles on average. You could go days without having to take over. Of course, when that event happens you have had, on average, 999 miles of what amounts to watching a pot, waiting for it to come to a boil. Odds are you are going to be half asleep, bored to tears, completely unable to react in time. So you crash.

    Predictable and reliable failure is better than infrequent but unpredictable and catastrophic failure.

    Fact is humans suck at monitoring situations that are infrequent. That’s why we typically use machines to monitor things like volcanoes getting ready to blow. The guys in lawn chairs all fell asleep, or were too drunk to react.

    Bottom line here is, for the time being, we either drive things ourselves, or we create a computerized driving supervisor. Given that we can’t train a computer to drive, we need to fall back and punt. IOW, get used to driving.

    I suspect the answer in the long run is to combine three to five self-driving systems, all with completely different algorithms, protocols, and sensors, and have them form a quorum. In effect, as suggested, having the different systems supervise each other. Complicated. Very complicated.
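    The quorum idea above can be sketched in a few lines. This is a toy illustration under stated assumptions, not a real control system: three hypothetical independent driving stacks each propose an action, the vehicle acts only when a strict majority agrees, and it falls back to a safe default otherwise. The function and action names are invented for the example:

    ```python
    from collections import Counter

    def quorum_decision(proposals, fallback="brake"):
        """Return the action a strict majority of independent systems agree on.

        proposals: one proposed action per self-driving subsystem.
        If no action wins a strict majority, fall back to a safe default.
        """
        action, votes = Counter(proposals).most_common(1)[0]
        if votes > len(proposals) // 2:
            return action
        return fallback

    # Three hypothetical independent stacks (e.g. vision, LIDAR, radar):
    print(quorum_decision(["steer_left", "steer_left", "brake"]))   # majority -> steer_left
    print(quorum_decision(["steer_left", "brake", "accelerate"]))   # no majority -> brake
    ```

    The hard part lorn points at is hidden in the setup, not the vote: the systems only provide independent checks on one another if their algorithms and sensors fail in genuinely different ways.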

  12. KG says

    As others have noted, “self-driving car with manual override” is not a viable half-way-house option. Edinburgh in Scotland is currently working up to trialing self-driving buses, but they will (“initially”, supposedly) have two crew. I assume they will have some system for ensuring that the “non-driver” has their eyes on the road at all times, but it still seems like a bad idea to me: city driving is replete with complex problems that require a lot of world-knowledge and judgement to deal with correctly (is that a plastic bag, a dog or a toddler in the road, and do I risk hitting another vehicle or a tree to avoid it?). The possibly viable half-way house is semi-automated control of commercial vehicles on motorways (freeways), with remote backup. The vehicles stay mostly in the inner lane, and are mostly in sight of fixed cameras. If the vehicle “wants” to overtake a slower vehicle, it radios a request to a human operator who can look at the view from several angles before giving permission. This type of driving is difficult for people because of its monotony, and drivers are often under pressure to drive even if tired (the only time I’ve ever come close to killing myself or anyone else while driving was in this context -- I didn’t pull over despite having problems staying awake, because I didn’t want to lose my job, and woke up to find the truck barreling along the pavement, fortunately at 6am on a semi-rural road).
