Highways are already scary, self-driving cars won’t help


An amusing anecdote: an engineer was out with the family of the man she was dating when the father tried to turn on the Full Self-Driving option of his Tesla, leaving her practically clawing her way out of the car.

But on the way back his dad started asking me “you work on self driving cars, yeah?” (I do, I’m a systems engineer and have job-hopped between a handful of autonomy companies.)

He started asking me how I liked his Tesla and I joked “just fine as long as you’re the one driving it!” And he asked me what I thought about FSD which he’d just bought. He asked if he should turn it on. I said “not with me in the car” and he then laughed and asked how I was still so scared when I work with this stuff every day.

I was like “Uhh it’s because I…” But stopped when he pulled over and literally started turning it on. I was like “I’m not kidding, let me out of the car if you’re gonna do this” and my boyfriend’s dad and brother started laughing at me, and my boyfriend still wasn’t saying anything.

His dad was like “It’ll be fine” and I reached over my boyfriend’s little brother and tried the door handle which was locked. I was getting mad, and probably moreso because I was tipsy, and I yelled at him “Let me the fuck out”

She’s a systems engineer who works on these self-driving cars, and she wants nothing to do with it? Does she know something the rest of us don’t?

Apparently, she does. Tesla has been faking demos of its self-driving cars, which I guess shouldn’t be a surprise to anyone following Elon Musk’s hype parade.

A 2016 video that Tesla (TSLA.O) used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer.

The video, which remains archived on Tesla’s website, was released in October 2016 and promoted on Twitter by Chief Executive Elon Musk as evidence that “Tesla drives itself.”

But the Model X was not driving itself with technology Tesla had deployed, Ashok Elluswamy, director of Autopilot software at Tesla, said in the transcript of a July deposition taken as evidence in a lawsuit against Tesla for a 2018 fatal crash involving a former Apple (AAPL.O) engineer.

It’s OK, though, because they were trying to show what was possible, rather than what the car could actually do, even if Musk was claiming the car was driving itself.

“The intent of the video was not to accurately portray what was available for customers in 2016. It was to portray what was possible to build into the system,” Elluswamy said, according to a transcript of his testimony seen by Reuters.

Like, the idea of cars driving themselves and bypassing the fallibility of human drivers sounds nice, but it’s clear that the car’s software can be even more stupid and flawed than people. I wouldn’t want to share the road with these things, let alone be in a car controlled by some engineering gadget.

You know what I think would be far more useful? Software that detected when the driver was significantly impaired. You’re weaving all over the road, or you’re exceeding the speed limit, or it senses that you’re nodding off, and it fires off alarms to let you know you’re not safe, and if you exceed a certain frequency of warnings, it transmits alerts to the police. That would be a smart car, making sure that the driving software in the human’s head was operating adequately.
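
For what it’s worth, the escalation logic is simple enough to sketch. This is a toy illustration of the idea, not a real driver-monitoring system; the thresholds, the rolling window, and the `ImpairmentMonitor` name are all invented:

```python
from dataclasses import dataclass, field

# All thresholds are made up for illustration; real values would come from
# human-factors research, not a blog post.
WARN_LIMIT = 3          # warnings tolerated within the window before escalating
WINDOW_SECONDS = 600    # rolling ten-minute window

@dataclass
class ImpairmentMonitor:
    """Tracks safety warnings and decides when to escalate (hypothetical)."""
    warnings: list = field(default_factory=list)  # timestamps of past warnings

    def record(self, now: float, weaving: bool, speeding: bool, drowsy: bool) -> str:
        """Return 'ok', 'warn', or 'alert_police' for one sensor sample."""
        if not (weaving or speeding or drowsy):
            return "ok"
        # Forget warnings that have aged out of the rolling window.
        self.warnings = [t for t in self.warnings if now - t <= WINDOW_SECONDS]
        self.warnings.append(now)
        if len(self.warnings) > WARN_LIMIT:
            return "alert_police"   # warning frequency exceeded; notify someone
        return "warn"               # fire the in-cab alarm
```

The hard part, of course, is the sensing: reliably deciding that someone is weaving or nodding off, not the bookkeeping.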

Knowing humans, though, there’d be a huge aftermarket in mechanics ripping out the safety measures.

Comments

  1. JoeBuddha says

    Self-flying planes are fine due to the long distances between aircraft and the fact that they operate in three dimensions. Self-driving cars operate in two dimensions and on crowded roads. Pretty sure AI systems aren’t ready for that, especially with humans driving most cars.
    My stepdaughter worked with her boyfriend on the first DARPA self-driving challenge. Her boyfriend is a former robotics teacher. They know what needs to happen for true autonomous vehicles. Whenever people talk about commercial self-driving cars, he just laughs.

  2. R. L. Foster says

    I was in Pittsburgh this past weekend. The subject of self-driving cars came up as we battled our way through that hilly, twisty city with roads packed with speeding, dueling cars. It’s the sort of city where a 45mph speed limit is little more than a suggestion. 65+ was the norm. We all liked the idea but scoffed at the thought of self-driving cars in such a wild, topographically challenging urban environment. The only way it could possibly work is if all of the cars were self-driving and scrupulously obeying the rules of the road. Otherwise you’d have slower-moving vehicles mixed in with those with over-caffeinated, adrenaline-fueled monkeys at the wheel. Imagine thousands of bumper cars on a crowded speedway and you get the idea.

    I really like the concept, especially as I age and the prospect of having my driver’s license revoked becomes more of a possibility with every passing year. But the reality is very far off for many places.

  3. Eric D Red says

    Aircraft do self-fly, and pretty well, but they have a few advantages:
    -all the input comes from more reliable, already-digital sources, like GPS location, altimeters, radar, and so on. Cars are working with things like interpreting road geometry, signs, and lane markings from camera data. That’s something the brain does well, but software still struggles with it.
    -Nearby aircraft, whether (well trained) human or computer controlled, behave fairly predictably. Not that many drunks or sleepy or whatever pilots out there, and rarely within seconds of potential impact.
    -very few pedestrians at 10,000 feet
    -very large teams of skilled engineers with years of experience building this up in the aircraft world. Tesla, not so much.

    But all that said, I think self-driving cars have the potential to be far safer than human drivers. Something like 90% of car accidents are due to human error. We take at least 0.2 seconds to react in the best of conditions, while computers run much faster. We get distracted, we fall asleep, and so on.
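
    To put rough numbers on the reaction-time point (the 0.2 s figure is from above; the 20 ms control-loop period is my own illustrative assumption):

```python
# Rough numbers for the reaction-time comparison. The 0.2 s human figure is
# the best-case value quoted above; the 20 ms computer figure is an assumed
# control-loop period, purely for illustration.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Metres travelled before braking even begins."""
    return speed_kmh / 3.6 * reaction_s

human = reaction_distance_m(100, 0.2)      # about 5.6 m at 100 km/h
computer = reaction_distance_m(100, 0.02)  # about 0.6 m with a 20 ms loop
```

Even in the best case, the human has covered a full car length before touching the pedal.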

    However, the technology isn’t there yet. It will take many more years to get there, if we have the desire to do so and we can stomach the transition period. There are so many political, commercial, and human-factors in the way of that. I could talk about that for hours, but the day job calls.

  4. whheydt says

    Good example of why programmers have no respect for computers and the software that runs on them. We know far too well what can (and does) go wrong.

    My father worked as a field service engineer with the Air Force for a number of years. Then, and for a long time afterwards, he flatly refused to fly. He knew too many ways for airplanes to fail.

  5. cgm3 says

    A common science fiction trope is for all cars in a city (or similar region) to be slaved to a central control which guides the vehicles, with serious penalties for “going manual”. This seems to be the only effective way to have “driverless” cars, but I don’t know if the current technology is up to the task, and doubt it could be implemented anyway. (“Drivin’ mah car mahself is mah right as a ‘Murikan citizen! FREE-DUMB!”)

  6. Artor says

    When ALL the cars on the road are driven by computer, I have no doubt it will be safer than things are now, but mixing human and robot drivers is a recipe for disaster. Human drivers are just too unpredictable, and human reflexes are often faster than most current robots. My car has some automatic braking features I wish I could shut off, because they make the car do unexpected things when I am already on top of the situation.
    The rest of that story is a nightmare though, with her BF dismissing her expertise in the field and belittling her self-preservation instincts, while his dad is gleefully violating her boundaries and locking her in with the lunatics. I hope this incident serves as a wakeup to her soon-to-be ex-BF and his Tesla-worshipping family.

  7. HidariMak says

    To be fair, a lot of what exists today would’ve been deemed “impossible” at one point. Smartphones are one example, since I can’t imagine the idea of a pocket computer being believable to most people 30 years ago. And computers do have a much faster reaction time than humans for their selected tasks. Just give technology the time to catch up to our aspirations, and reject the results from shysters.

  8. anat says

    If Musk claimed the car was self-driving when it wasn’t, how far was he from what Elizabeth Holmes did?

  9. Akira MacKenzie says

    But… Elon thought the Johnny Cabs in Total Recall were soooooo cool. They HAVE to work! Because something that cool can’t not exist!

  10. larpar says

    I generally brake for squirrels (sorry, Iris). Occasionally I don’t, because it might cause a crash.
    I’ve hit a deer before. I might have been able to avoid it by slamming on the brakes, but the car behind me would have rear-ended me.
    How do self-driving cars handle these mini trolley problems? Although I haven’t experienced it myself, I can imagine a self-driving car having to decide on a full-blown trolley problem.

  11. says

    As things stand now, I think I would feel safer putting my life in the hands of the autopilot from Airplane than any self-driving car from an Elon Musk company.

  12. d3zd3z says

    Unfortunately, Musk’s (and Tesla’s) blatant fraud around self-driving is hurting the really good work that other companies are putting into this. Other companies recognize that cameras alone aren’t enough to safely navigate a vehicle. They are also running controlled trials, with trained drivers ready to take over as soon as something goes wrong. This is a better approach than Tesla’s, which basically uses its customers as test drivers (with no training). There is a reason that other car manufacturers aren’t giving dates for when they will have self-driving capabilities.

    We already have some of these self-driving cars, in very limited situations, and they are doing pretty well, but there is still a long way to go.

  13. IX-103, the ■■■■ing idiot says

    I think we’ll see cost effective, true self driving cars eventually, but not from Tesla.

    I mostly agree with the woman in the article’s summary of the state of self-driving vehicles. Almost all companies working on it are not there yet. I think Alphabet is the exception: it not only had an early start, but in true Google fashion threw money/engineers/hardware at the problem until it reached a (clunky, unpolished, expensive) solution. So not only are other companies playing catch-up, they are also trying to make the system affordable to integrate into everybody’s cars. Tesla is in the same boat, though since it is already shipping units it is more focused on reducing unit cost than on improving accuracy and safety. So Tesla is falling behind in algorithm development (and gradually finding out the hardware it shipped is not the hardware it needs).

    I think self-driving technologies (other than Tesla) have made driving safer. Adaptive cruise controls prevent rear-ending the car in front of you by triggering the brakes when you get too close. Lane tracking is used to notify drivers when they are driving out of the lane. This is not nothing.

    Also, trolley problems belong to the realm of philosophers. The key to a good self-driving car is to avoid ever being put in a lose-lose situation. That may mean slowing down if a car is following too closely until they reach a speed where the following distance is safe.
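
    That “slow down until the following distance is safe” rule is easy to state precisely. A toy sketch (the two-second gap and the function name are my own illustrative assumptions, not anyone’s actual controller):

```python
# A toy version of the rule described above: if the car behind is tailgating,
# ease off until the measured rear gap is at least a two-second gap at our own
# speed. The two-second figure is an illustrative assumption.

FOLLOW_TIME_S = 2.0  # assumed minimum safe time gap

def target_speed(current_speed_mps: float, rear_gap_m: float) -> float:
    """Speed (m/s) at which the existing rear gap becomes a safe time gap."""
    safe_speed = rear_gap_m / FOLLOW_TIME_S
    # Only ever slow down; a generous gap is no reason to speed up.
    return min(current_speed_mps, safe_speed)
```

At 30 m/s with a 30 m gap behind, this eases off toward 15 m/s; with a 90 m gap it leaves the speed alone. The point is exactly the one above: shape the situation so the lose-lose choice never arises.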

  14. says

    I knew an engineer for BMW who told me that whenever a car goes to the dealer, it automatically has its software loadout updated. He was very pleased that they seldom bricked a car. Meanwhile, I was freaking out because it meant that a test/release cycle was absent. I’m expected to jump into a newly upgraded car and regression test it at 70mph on a highway? Software bro, please!

    It seems to me that humans aren’t great drivers and neither are the computers we program to the task. Accepting a certain casualty level is probably the best option.

  15. says

    feralboy12:
    As things stand now, I think I would feel safer putting my life in the hands of the autopilot from Airplane than any self-driving car from an Elon Musk company.

    I’d feel safer putting my life in the hands of ED-209. Or one of the fine products from RobCo.

  16. BACONSQAUDgaming says

    I have a Tesla Model 3, but wasn’t willing to pay the extra money for Full Self-Driving. However, the car does come with a beta version of it. If you enter your destination and put it into self-drive, it will maintain the speed you are travelling, slow down (and if necessary stop) to match the speed of the cars in front of you, and steer correctly along the route (highway works better than city). What it won’t do, unless you have FSD, is change lanes to pass slower drivers, stop for traffic lights/stop signs, or change speed to match changes in speed limits.
    I’ve experimented with it a little – usually to amuse my kids and their friends by driving “using the force”. However I don’t trust the technology enough to want to buy FSD, and I prefer to be in control anyway. I like the idea that if I was injured and unfit to drive (I don’t drink), that I could direct the car to take me to the hospital. If I don’t want to drive, I’ll carpool or take public transit.

  17. Peter Bollwerk says

    #18
    I also own a Model 3, and also have no interest in buying FSD.
    But I do find the Autopilot (assisted cruise control) to be better than similar features in other cars I’ve driven. I still don’t completely trust it, but it works pretty well. Nevertheless, I will always remain attentive and in control.
    I honestly don’t understand why people are paying thousands of dollars for FSD when it’s still so far from level 5 autonomy. I won’t even pay $60 to beta test a game.

  18. billseymour says

    Marcus @16:  it’s not like a Microsoft Windows update.  My understanding is that embedded systems code goes through much more aggressive testing than does any shrink-wrapped software you’d buy at the computer store, or anything a company would write for its own use.  But I’ll bow to your expertise if you’ve worked with embedded systems before.

  19. Artor says

    My Subaru has some driver assist tech, as I mentioned above. It has some lane-maintaining features that are laughable. If I was crazy, I could set it to automatically stay in a marked lane on the highway. It doesn’t drive smoothly though: it bounces off the painted lines like a Roomba at 65 mph. If it interprets a skid mark or a piece of trash as a painted line, it takes control of the wheel and lurches the car away from it. I disabled all of that shit the first day I got the car.

  20. says

    After millions of miles on the road, I know not to trust capricious, ignorant, selfish human drivers. And our organization’s aerospace engineer members tell us how foolish it would be to try to create a flawed, limited, human-designed vehicle that could out-think and out-maneuver irrational human traffic. The original designers and the current engineers of Tesla do a great job. But they, too, know Musk Melon has always been a huge bullshitter. They and we shouldn’t believe anything he says.

  21. Rich Woods says

    Today I got the taxi home from an appointment. It’s the first time I’ve been in a car for about three years (thanks, Covid) as I don’t own one myself. The journey was a fine reminder of how rarely people approach junctions correctly, how infrequently they indicate properly at roundabouts, and how stupidly they position their own car even when they can clearly see that another car is intending a specific manoeuvre and has to wait for them. (None of this is a reflection on my taxi driver, BTW. We were sharing outbursts of shock at the idiocy shown on a simple ten-minute journey at a quiet time of day.)

    I live in a quiet residential area built in the 1890s. When the streets were laid out they were of ample width to accommodate the Victorian traffic of the day, ie bicycles and horse-drawn carts. Today there are cars parked on both sides of the road, leaving just enough room to safely drive one car down the centre at maybe 15mph (or very little room to very carefully drive a van or delivery lorry at half that speed; wing mirrors are frequently clipped). Any car facing oncoming traffic has to hope for a handy gap in the parked cars or is forced to reverse to a spot (often a junction) where there is room for the other vehicle to pass. Delivery lorries in particular do not enjoy having to reverse around right-angle bends, especially if a resident has been a little careless at parking their car completely within the delineated areas. There’s also a primary school nearby, so several times a day there are little kids running up and down the pavements, burning off energy.

    If Tesla or one of the other car companies can demonstrate that their self-driving cars can negotiate that environment safely, day in, day out, I might begin to have some hope for them. Until then, I’m going to remain at my default techie setting of ‘Fuck that for a game of soldiers’.

  22. antaresrichard says

    November 24, 2022, Thanksgiving Day: an eight-car pile-up caused by a Tesla Model S on the San Francisco–Oakland Bay Bridge. The driver stated that the Tesla was in Full Self-Driving mode at the time.

  23. Dave says

    My 2021 Honda has many of the safety features you mention. It warns me if I try to change lanes without signaling or if another car is in my blind spot. It notices if I’m driving erratically and asks me if I want to take a break and get a coffee. It automatically applies the brakes if I fail to. It works very well, and while the warnings can be annoying at times, they are very helpful. But it’s a far cry from “self-driving.”

    We’ll have self-driving cars one day, perhaps in the next few years, but the technology is not there yet, and deploying it in mass-market cars is irresponsible.

  24. milesteg says

    As a software engineer, there’s zero chance I get in a “self-driving” car anytime soon.

    It’s one thing to build a system that can (mostly) follow road markings and read speed signs to adjust speed. It’s orders of magnitude more difficult to build a system that can actually handle even 90% of what a human driver has to handle. Several more magnitudes of difficulty to handle the next 9%, and several more to handle the next 0.9%, and so on and so forth.

    Computer vision/machine learning systems like Tesla is trying to use are merely extremely sophisticated conditional logic. Instead of a programmer writing out the conditional logic, we’ve created systems that can determine “if-else logic” based on algorithms and data input.

    In a traditional program, you have to test to make sure the programmer’s logic is right. You have to examine each edge case and corner case to understand how the system handles input to generate output.

    In “machine learning” you have to not only test to make sure the programmer’s logic is right (for what they wrote to process the input), but you have to test that all the input data is right. That is a far more difficult task because the whole point of feeding in the data is you don’t have the capacity to build the logic yourself.

    Ultimately, what this means is you can’t really fully test the system. This is OK for systems which are not safety critical (like facial recognition) or can be confirmed (like diagnostic software), but it is, to be quite blunt, reckless to use it in a real time, safety critical system.

  25. Daryl Lafferty says

    I own a Tesla Model 3 with FSD (“Full Self Driving”). I turn it on occasionally, mainly out of interest since I’m kind of a nerd.

    It is far from being as good as a human in most ways. The driver needs to be paying attention and ready to take over at any time, but that’s no different from driving manually. Other than the famous “phantom braking” (which the driver can override), it doesn’t make any dangerous sudden moves.

    I think the engineer who panicked was either exaggerating, or it’s a made-up story. Turning FSD on briefly with the driver paying attention is as safe as driving manually, and she would know that.

    As a comparison, Waymo has a driverless taxi service in Tempe and Chandler that really does a good job. Seriously, it drives as well or better than most drivers on the road, and I don’t think it has had any serious accidents where it was at fault over the last 2-3 years of service.

  26. ardipithecus says

    If number of patents is an indicator of development activity, Tesla isn’t even in the top 5.

    New patent applications in 2022 re: autonomous driving systems, top 5:

    Toyota: > 1000
    Honda > 800
    Waymo ~700
    Hyundai > 600
    Robert Bosch ~ 600 (a tech and software developer, including control systems)

  27. says

    So Musk has been flat-out lying about his FSD cars the whole time, and yet no serious effort was ever made to get those dangerous, unproven cars off the road? Thanks, Republican de-regulators!

    The driver needs to be paying attention and ready to take over at any time, but that’s no different from driving manually.

    The difference is, if the driver isn’t already exerting control, and doesn’t take over until after they’ve realized the car is making a mistake, then the driver simply won’t have enough time left to avoid whatever situation the car has ALREADY steered them into.

  28. says

    Turning FSD on briefly with the driver paying attention is as safe as driving manually and she would know that.

    The operative word here is “briefly.” Just like it’s safe to take your hands off the steering wheel BRIEFLY.

  29. whywhywhy says

    How much are the limitations of self-driving cars due to the way we have built our roads and car infrastructure? It was not designed for computer driven vehicles.

  30. says

    I had a conversation with one of the heads of automation at one of the other major car companies, and he did not have anything good to say about Tesla’s ‘self driving’ mode, and gave a good rundown of the issues, similar to d3zd3z’s comment above. i.e. The issue is less self-driving cars in general, but rather Tesla’s approach in particular.

    I suspect we’re not far off from self driving cars from other companies that really are doing the R&D the right way. And I have no doubt that they’ll be safer than human drivers on average. I mean, they can monitor multiple streams of data simultaneously. They pay attention 100% of the time and don’t get distracted fiddling with the radio dial. Their reaction times can be much faster. Etc.

    Sure, everyone talks about edge cases. But edge cases are problems for humans, too.

    I read one article about self-driving cars that stated, “I don’t know about you, but I worry about being on the road as a pedestrian or driver if vehicles cannot recognize pedestrians and other objects with 100 percent accuracy!” Guess what – humans can’t always recognize pedestrians and other objects with 100 percent accuracy either, but we still accept the risk.

    Yes, there’s still more R&D to go before self-driving cars are truly practical. But the measure of safety isn’t perfection. It’s just better than people. And people have a lot of limitations.

  31. DataWrangler says

    I may have missed it in the 33 comments posted so far, but no mention that her concerns (justified or not) were dismissed, her wishes ignored, and her control over her situation removed.

    I hope that’s because that is all mind-numbingly obvious, but I doubt it.

  32. numerobis says

    Lies told in 2016 are indicative of how honest the people are, but not particularly about how good the system is today.

    Oh. The system is still shit. Oops.

    The cruise control is nice. The lane-centering is ok except it means it lines you up to hit every pothole.

    The “Full Self Driving (TM)” that then claims to be just a driving aid (i.e. not actually full) is by all reports in the uncanny valley – just good enough to lull you into complacency before it tries to kill you.

  33. numerobis says

    DataWrangler: I kind of expect the story ends “and then she never talked to that asshole or his family again” but yes, I noticed it.

  34. drew says

    Don’t confuse huckster Elon Musk with technology that could minimize the harm that automobiles do. That would be like dismissing automobiles in toto because Henry Ford also published The Protocols of the Elders of Zion.

    @ 3. R. L. Foster: Pittsburgh is one of the places where multiple companies are working on autonomous vehicles.

  35. John Morales says

    “Highways are already scary, self-driving cars won’t help” comes with the caveat that the technology is yet to come.

    If it did work, then it would help.

    After all, anyone who has ever been on the road knows that half the drivers are below average, and the average is not great in any case.

    So, nothing wrong with the principle or the aspiration of autonomous vehicles, in my view.

    (Mind you, I’m a bike rider — never got a car license — and I doubt autonomous bikes will ever be a thing; might as well adapt and get into a metal box once that happens)

  36. mcfrank0 says

    I just read an article on Boston Dynamics moving on the next stage of research with its humanoid Atlas robot.

    The author speculated a lot about what this means, but it’s nice that the actual researchers explain exactly where the technology is at the moment. From what I can see, we’re still using Blockworld to model the environment. Processing no longer takes hours (or days) for each move.

    The author felt it necessary to discuss Elon Musk’s Optimus proposal for a humanoid robot and then chose to pretend that the “prototype” Musk showed his audience was not a human dancer in a costume.

    Whatever the value of the article, the two videos from Boston Dynamics, one for sales/entertainment and one behind the scenes, do not disappoint.

    https://www.theverge.com/23560592/boston-dynamics-atlas-robot-bipedal-work-video-construction-site

  37. silvrhalide says

    The least safe thing about that entire car ride was the asshole BF and his asshole relatives. Setting aside for the moment the obvious and well-documented problems with FSD vehicles, Systems Engineer GF needs to DTMFA. Her odds of survival will go way up because
    1) never-married women statistically live the longest of any cohort (never married, married, divorced, widowed) and also report the highest satisfaction with life overall. By contrast, never-married men drop like flies, whereas married men actually live the longest. But divorce is not good for longevity or overall health & happiness for males or females.
    2) her blood pressure will drop like a stone when she doesn’t have to deal with the contemptuous, dismissive douche armada
    3) when the douche armada finally kills themselves from making shitty decisions irrespective of rational input/facts, she won’t be in the goddamn car.

    Also, WTF is with the BF?! Is this dude bucking for the Aziz Ansari Creepiest Date EVAR Award? GF says she isn’t comfortable being in the car with FSD mode engaged. Does BF respect her wishes? Does he stand up for his GF, tell his dumbass dad, “hey, knock it off, she said she wasn’t ok with this”? NO, he and his asshole relatives double down on dismissive misogyny AND bring up faux concern for the younger brother when Systems Engineer understandably gets pissed off and starts cursing. Are any of the male assholes in the car engineers trained and working in the field on this technology? No? But they ARE quick to dismiss the educated opinion of the person who IS, presumably because she possesses lady parts, so her opinions are, by default, suspect.
    Systems Engineer, wherever you are, ABORT ABORT ABORT. DTMFA. There are over 8 billion humans on the planet, a significant subset of whom will be better for you than TFG. Spineless BF IS NOT GOOD ENOUGH FOR YOU. RUN!

    @2,5 A number of aircraft have autopilot features for when the aircraft is already airborne. I have yet to see an automated system for landing and takeoff, which are the most dangerous parts of any flight.

    Like Artor in #21, my Honda’s lane assist can’t distinguish between painted lines, skid marks, or tar seams in the road, and its LIDAR freaks out with oncoming traffic on a curvy road–the LIDAR tells the car to brake (I get the flashing BRAKE warnings because the programming was apparently designed with only straight roads in mind) because it senses the oncoming traffic as an impending collision without taking the road curvature into account. For that matter, the optical sensors used in Teslas only work if there is a significant contrast in objects, which is why they fail to detect light-colored vehicles in dawn/dusk/white-sky conditions. (The technology has been around since the 90s and is a known problem.) Keeping all that in mind, FSD is more like “license to kill”.

  38. call me mark says

    Hey, I’ve got an idea. Instead of having self-driving cars, if you don’t want to drive yourself you could hook your car up to a bunch of cars all going the same way and then just the lead car has to be driven. You could call it a “train” or something.

  39. KG says

    That would be like dismissing automobiles in toto because Henry Ford also published The Protocols of the Elders of Zion. – drew@37

    If only we had!

  40. Dunc says

    @42:

    I have yet to see an automated system for aircraft for landing and liftoff, which are the most dangerous parts of any flight.

    Autoland has been in use in commercial aviation for decades (particularly in Europe, less so in North America), although it’s still not generally the preferred option, and it does require an airport equipped for CAT III ILS operations.

    Also the Garmin emergency autoland system achieved FAA certification for some GA aircraft in 2020.

    Airbus has a very well-developed Autonomous Taxi, Take-Off and Landing project.

  41. says

    billseymour@#20:
    Marcus @16:  it’s not like a Microsoft Windows update.  My understanding is that embedded systems code goes through much more aggressive testing than does any shrink-wrapped software you’d buy at the computer store, or anything a company would write for its own use.  But I’ll bow to your expertise if you’ve worked with embedded systems before.

    It can be a lot like Windows update, for the same reason Windows update is like Windows update. What we don’t know is whether the car manufacturer only uses the same components in the same way, and does not have version diffusion going on. If you’ve got a situation where one car may be running 3 different door latch controllers, then all bets are off. Generally the way these systems are implemented is as a common, private messaging bus – each component talks to the others as it wants to. That is fine if and only if all the components behave. If a component goes haywire because of version incompatibilities, or a hardware failure unique to it, then you can get some unusually difficult errors to track down.

    My experience with embedded systems has mostly been from a standpoint of securing them. And, let me say, the designers of embedded systems are the absolute worst. Because all their assumptions begin and end with “we are on a private bus” and, unfortunately, that cellular internet connection that the R&D guys just put on the private bus means it ain’t private any more. When you audit the systems embedded system designers build you find things like PLCs with no authentication, meaning that anything that can generate messages to or from it has control over the component. That’s why “just reload the software” isn’t the danger-sign, it’s the symptom. The danger sign is that the coders in R&D have the ability to add new features and stacks of features to your load-out and you have no way of knowing if you want it, if it compromises your system, and/or how it impacts the overall reliability of the system.

    Historically, “reload the software every time” is an indicator of a system that was not designed with reliability or manageability in mind. It is indicative of a “just ship it and we’ll tweak it until it works” model – in that sense, much like Microsoft Windows. The number of possible intersections of interactions in Windows makes it more or less impossible to test. Put differently, I want my car engineers to be so confident that they got the code right the first time that there are no “dot releases” at all – just a rock solid new version every year, if it’s necessary.

    A lot of these issues can be dealt with by carefully layering abstractions. E.g.: your door latch controller may be a different version, but your bus API for door latches uses a minimal, guaranteed set of features. That takes design discipline I am willing to believe aerospace and car manufacturers are capable of, but Microsoft is not. Or, more precisely, the Microsoft ecosystem is not – remember, Microsoft does not own or control the device drivers that load with GPUs or USB devices: “here’s some rando new functions to load into kernel space and give complete bus-mastery to!” I immediately get suspicious when I see a system that does not appear to have followed any particular design methodology, because that tells me that the lunatics are running the asylum and folks in R&D feel comfortable yukking it up by mapping the steering left/right onto the volume knob, then forgetting to comment out that code in a major release.
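
    The layering idea can be sketched like this, again with invented names: callers only ever see a minimal, guaranteed latch contract, and a thin adapter per hardware revision maps that contract onto whatever the revision actually speaks.

```python
from abc import ABC, abstractmethod

class DoorLatch(ABC):
    """Minimal guaranteed contract: lock, unlock, is_locked. Nothing else
    is promised, no matter what extras a given revision supports."""
    @abstractmethod
    def lock(self) -> None: ...
    @abstractmethod
    def unlock(self) -> None: ...
    @abstractmethod
    def is_locked(self) -> bool: ...

class LatchV1(DoorLatch):
    # v1 hardware: a single writable register, 1 = locked
    def __init__(self) -> None: self._reg = 1
    def lock(self) -> None: self._reg = 1
    def unlock(self) -> None: self._reg = 0
    def is_locked(self) -> bool: return self._reg == 1

class LatchV3(DoorLatch):
    # v3 hardware: string commands, plus extra features the API hides
    def __init__(self) -> None: self._state = "LOCKED"
    def lock(self) -> None: self._state = "LOCKED"
    def unlock(self) -> None: self._state = "UNLOCKED"
    def is_locked(self) -> bool: return self._state == "LOCKED"

# Callers never learn (or care) which revision they got.
for latch in (LatchV1(), LatchV3()):
    latch.unlock()
    assert not latch.is_locked()
```

    The discipline is in what the base class refuses to expose: version diffusion behind the adapter can’t leak out through an interface this small.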

    The main thing that triggers my violent revulsion reaction when I hear about “the software gets reloaded every time” is the security perspective. For that to work, the reload process must be able to completely bypass all protections in the system. Which means that if there’s a flaw in the embedded code in the Wi-Fi stack, some hacker by the side of the road can do a code injection attack, emulate the management plane, and take the whole thing over.
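
    For contrast, here is roughly the minimum a sane reload path should do before flashing anything: verify the image against a key the roadside attacker doesn’t have. This is a toy sketch, not any real vehicle’s update path; the shared-secret HMAC stands in for what would really be asymmetric signatures anchored in ROM, and all names are invented.

```python
import hashlib
import hmac

# Toy illustration: refuse to flash any firmware image whose signature
# doesn't verify. In a real system this would be an asymmetric signature
# checked against a public key fused into the boot ROM.
SIGNING_KEY = b"factory-provisioned-secret"  # invented stand-in

def sign(image: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def flash(image: bytes, signature: bytes) -> bool:
    # compare_digest avoids a timing side channel on the comparison
    if not hmac.compare_digest(sign(image), signature):
        return False  # reject: unsigned or tampered firmware
    # ...write image to flash here...
    return True

good = b"door_latch_fw_v2.1"
assert flash(good, sign(good))              # legit update: accepted
assert not flash(b"payload", b"\x00" * 32)  # forged update: rejected
```

    The point isn’t the crypto, it’s the policy: a reload path that will accept anything the management plane sends is a reload path that will accept anything an attacker emulating the management plane sends.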

    I saw a demo at CanSecWest back in 2002 or so, where a guy exploited an overrun in a cell phone’s cell controller software stack. How did this happen? The hardware guys who made the cell chip weren’t software engineers, and just hacked together a driver and APIs that worked, but had security holes. They weren’t worried about security flaws because, what the heck, the chip’s gonna be soldered to the backplane of the cell phone and it’s going to be in a private environment. Except it’s not. So the guy’s hack took over the cell controller, which had bus mastery, then went over to the process table (the phone was running a version of Linux, but that doesn’t matter) and assembled a process running with uid=0. It was pretty slick – you run this .com file on a laptop and boom, you have a root prompt in a shell running on that guy over there’s phone.

    The “just send the software down and reboot” model is very popular in phones, and you can see the qualitative results of that management philosophy.

  42. says

    @KG@44:
    One problem is that cars were seen as an elite solution for the problem of urban horse poop. I did a posting on that a few years ago – NYC had horse poop mountains occupying entire blocks, 6 storeys high. The answer, of course, is trains. But for moving commercial goods you then have people pushing cartloads of stuff on the sidewalks. (The horses generally acted in the role we now assign to panel trucks)

    Cities look good because we collectively made a lot of decisions that made them less pestilential than they were. Basically, large numbers of humans are disgusting.

  43. silvrhalide says

    @45 Cool link–thanks!
    BUT
    Autoland is rendered useless by a tailwind of more than 11.5 mph and cross or headwinds of more than 28.5 mph! So it’s great for fog but useless in pretty much any conditions other than “dead calm” and “mild breeze”.
    From your link:
    The autoland system’s response rate to external stimuli work very well in conditions of reduced visibility and relatively calm or steady winds, but the purposefully limited response rate means they are not generally smooth in their responses to varying wind shear or gusting wind conditions – i.e., not able to compensate in all dimensions rapidly enough – to safely permit their use.

    A partial list of US airports where Autoland would be functionally useless:
    JFK, LaGuardia, Newark–coastal airports, too windy
    Boston Logan–coastal, too windy
    BWI–coastal, too windy
    Asheville NC–mountainous airport! But the wind shear from the air currents hitting the Appalachians would make it TOO WINDY
    LAX–coastal airport, too windy
    Chicago O’Hare–Chicago! The Windy City! TOO WINDY
    any airport in Florida–too windy

    The Garmin program seems more like a concept program at this point than anything actually useful. I sure hope it works better than the Garmin GPS, which among other failures, wanted me to turn right onto railroad tracks. Um, no HAL, not feeling suicidal today.

    The Airbus program has 500 flights using the technology, 450 of which were for data gathering purposes… not exactly filling me with confidence.

  44. says

    Has anyone else considered the fact that our road/traffic/sign/lane system design is so primitive, complicated, and inconsistent (no real road safety design improvements for about 50 yrs) that it continues to facilitate accidents and deaths, by people who are either not equipped to handle it, or too busy with the ‘infotainment’ screens all over their dashboards to pay attention to the chaos that is our road system?

  45. Pierce R. Butler says

    shermanj @ # 50: … our road/traffic/sign/lane system design is so primitive and complicated and inconsistent …

    and sabotaged.

    I could show you a particular location where an established shopping-center developer politicked its home municipality’s urban management to monkeywrench a mandated bridge to a competing development immediately across a jurisdiction line (and an interstate highway). Short version: cars come down a ramp directly into a traffic circle, not how the engineers say to do it… (Ironically, this endangers vehicles on the string-pullers’ side of the connector – they’ve apparently self-sabotaged too.)

  46. Kagehi says

    Sadly, this isn’t just a “Tesla problem”. One of the hosts of TYT commented on her new car, which had “braking assist” – it’s supposed to help prevent you from accidentally rear-ending another car. While this may “technically” be a good idea, it’s glitchy as hell, and randomly gets confused by marks on the road, shadows, etc., which trigger it to hit the brakes “in the middle of traffic” – at the risk of the car behind a) not having the same feature, and b) not reacting fast enough. This is the same insane behavior in the above video, which caused an accident, and it’s not even in a “self driving” vehicle, just one they tried to make “safer”.

    They seem to have rushed this crap onto the market, and lied about how well it worked, across the board – all while research groups, like DARPA have been saying, “We are not there yet, and there are still serious issues, even if some happen rarely.”

  47. cvoinescu says

    I don’t understand why people even bring up airplanes in comparison to self-driving cars. The moment cars will have their own, empty, straight, super-wide, accurately mapped lane with no chance of any obstacle, total grade separation, clearance from Road Traffic Control to occupy the lane, all cars in that roadspace under Road Traffic Control too, the car in front and the car behind at least two minutes away, and all cars driven by drivers with hundreds of hours of training, two drivers each, both paying attention and ready to take over from their respective self-driving systems, I would totally not hesitate to trust multiple redundant self-driving computers cross-checking their decisions based on multiple redundant sensor inputs controlling multiple redundant actuators, as long as they had their certification and service records up to date, the two drivers had up to date certifications too, and, of course, only if the software was written, reviewed, tested and audited to what I consider reasonable standards (i.e., from before Boeing talked the FAA into allowing them to mark their own homework).

  48. lanir says

    The engineer story doesn’t sound that unusual to me. If you have knowledge of a field most people don’t you’ll find any number of everyday idiots who are happy to assume you know less than them. About your field. Which you’re paid to be good at. If you want something you can grasp onto a bit better imagine if you were a farmer and every tenth person or so you mentioned this to began to regale you with advice based on how to win at a fake farming videogame. Treating the game as a how-to. Or something similar with mechanics or factory work.

    As far as embedded controllers go, sometimes the programmers are pretty shady people. I’ve met three so far across 2 projects. One was a bit over 20 years ago and challenged me and his girlfriend to figure out yp (the Unix analog to a Windows domain – centralized management of login info plus a few side benefits; yp is outdated now). We had some questions but he brushed them off as an “exercise for the student” and just handed us a dozen printed pages’ worth of manual pages and articles about it. We ended up telling him they were about 3 different solutions (he hadn’t noticed that) and eventually got him to admit he had no idea how it worked and hoped we’d tell him so he could implement it at work. Pure genius. The other two I met a few years ago; they worked on widgets that managed fuel tanks for gas stations. They phoned home, quite literally, using a VPN over a cell modem connection, but anyone who could get past that could just open an unsecured telnet connection and have system access. The VPN was run by the telco, so any server hack could expose the whole business to things like fuel trucks sent out to hundreds of locations to fill already-full tanks. And you get server exposure via an internal bad-faith employee, or anyone at a customer site willing to pull a SIM card.

    About self-driving cars, I still think at least part of it will involve changing roads. Right now you can tell no one is serious about this stuff being in widespread use because they’re not making press releases about what changes need to happen to roads. Tesla actually removed sensors a few months ago? Yeah, no. Not serious about this shit at all. Not even a little bit.