Every billionaire is a con artist


And few are phonier than Elon Musk. Here’s a good interview with Edward Niedermeyer, who has written a book about the way Musk built a car company on some good engineering (not done by him) and a whole lot of lies (his contribution). In particular, his so-called self-driving cars are killing people.

People such as Walter Huang and Josh Brown have died. The Tesla fans blame them. And Tesla basically says, These people chose to be distracted. They chose to operate the system in a place we told them isn’t necessarily safe for it. Therefore, it’s all on them. And the NTSB said, No. We know from literally decades of behavioral psychology—particularly research on safety-critical systems and partial automation—that if you put someone in what’s called a vigilance task, where they’re just monitoring this automation, and they just have to be there to jump in and take over when something goes wrong, which over time gets more and more rare, it is not a question of good drivers doing okay and bad drivers doing poorly. It’s not the same as driving. It’s a task fundamentally different from driving. It’s one that we as humans are actually less well evolved to do than unassisted driving. There’s no moral or skill factor in this. Inevitably, every human who is put in that position will eventually, given enough time, become inattentive, then given enough time, the system will find something it can’t deal with. Basically, people are gambling with these sorts of numbers. They’re playing roulette in a way.

That’s an interesting point. Our brains don’t have a good autopilot — our attentiveness tends to wane if we aren’t given constant feedback to keep us tuned in. I know that when I’m on a long-distance drive, my brain needs constant reminders to refocus and stay in the present, on the task at hand. If I don’t have that, I know I’ll lapse into daydreaming and thinking about totally irrelevant stuff.

I suppose if we had a really good self-driving system, we could replace the need for minute-by-minute attention to the road with a system that delivered random electric shocks with a voice-over saying “wake up, dummy”, but I don’t think it would sell well, and if mandatory, it would fuel a robust market in YouTube videos instructing you in how to rip it out.

When Elon Musk promotes self-driving cars, though, he’s being openly fraudulent.

For me, this is where Tesla crosses into unambiguous fraud. First of all, it’s Level 5 autonomy, which you have to understand nobody in the space is pursuing. Level 5 means fully autonomous, with no need for human input ever. But operating anywhere—basically anywhere in the United States, anywhere a human could drive, this system needs to be able to drive. This is the core of its appeal as much as, Oh, we’re developing this generalized system. Everyone else is tied to these local operating domains with mapping and all this other stuff, more expensive vehicles. We don’t have time to get into all of the ways in which this is an absolute fantasy. Anybody who’s serious in the AV sector is just amazed that this even has as much credibility as it does. What it comes down to is that he’s identified not a plausible fraud or vision that he is selling, but an appealing one. People believe it because they want to believe it. They want to believe that they can buy a car—it gets back to that frisson of futurism—without having to change any behavior. You’re just gonna go out and buy another car. It’s gonna belong to you like any other car. But unlike other cars, it’s going to drive itself anywhere and everywhere. And that’s absurd. With a camera-only system, technically, people call it AI. People call it machine learning. Fundamentally, it’s probabilistic inference. And when you think about that term, probabilistic inference, you think about something that could kill you at any second. Does it sound like a good combination?

No, it doesn’t. That’s a terrifying combination. Even worse, imagine being on a freeway with thousands of other cars, all relying on those odds. That’s not just you rolling the dice; that’s everyone rolling them simultaneously, trusting that no one will get snake eyes.

This is the principle that drives the profitability of casinos. Even a tiny edge in the odds of a chance event, when iteratively repeated by a great many people, converges on inevitability. Hey, that’s also a factor in understanding evolution!
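
To see how fast tiny odds compound, here’s a minimal sketch in Python. Every number in it is a made-up assumption for illustration; the point is the shape of the math, not the values.

```python
# How a tiny per-trip failure probability compounds across a fleet.
# All numbers are illustrative assumptions, not real statistics.

p_failure = 1e-6        # assumed chance a single trip ends in a serious failure
trips_per_year = 500    # assumed trips per driver per year
drivers = 1_000_000     # assumed number of cars relying on the system

total_trips = trips_per_year * drivers

# Probability that at least one trip, somewhere in the fleet, fails this year:
p_at_least_one = 1 - (1 - p_failure) ** total_trips
print(f"P(at least one failure) = {p_at_least_one:.6f}")    # effectively 1.0

# Expected number of failures per year:
print(f"Expected failures/year = {p_failure * total_trips:.0f}")    # 500
```

A one-in-a-million trip becomes a statistical certainty for the fleet as a whole. That’s the house edge, iterated.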

Comments

  1. snarkrates says

    Far be it from me to dissent from any article pointing out that Elon is a charlatan. However, with regard to self-driving technology, one needs to point out that humans don’t do so well with driving themselves. In fact, one of the main driving forces for self-driving cars is that human error is the leading cause of accidents. If self-driving technology is implemented in an intelligent manner, cars will not speed. They will not tailgate. They will maintain awareness of obstacles and traffic around them. Right there, they are already head and shoulders above human drivers. And that is just with present technology.
    The biggest problem with autonomous driving is that if there is an accident, it is difficult to determine who to blame. As a result, autonomous driving technologies will have to meet ridiculously high standards–being as near to perfect as possible. This may actually retard the uptake of a technology that could save tens of thousands of lives a year. It may even wind up as a driving factor behind allowable error rates in microelectronics.

    So, yes, Elon is a charlatan. Yes, he’s promising much more than he can give. No, that does not mean that in this one area he is making things worse. With respect to driving, I, for one, welcome our new robotic driving overlords.

  2. barbaz says

    Not gonna disagree on the Elon Musk part, but… would you rather be on a freeway with thousands of cars driven by people? Have you ever met people? So computers sometimes make deadly mistakes, sometimes in situations where a person wouldn’t have. But people make deadly mistakes all the time, too, and often in situations where a computer wouldn’t have. I guess the whole issue is a bit trolley-dilemma-y, but all in all I’d rather just have the driver that kills the fewest people.

    Or get rid of cars entirely, I would be fine with that.

  3. Marshall says

    I agree with #1…the scare term of “probabilistic inference” is what people do too. I’m not disagreeing with the general conclusion here–that the systems are not robust enough to deal with actual driving conditions. But I feel the argument provided in the quoted paragraphs is weakened by the claim that only machines have a nonzero probability of deadly mistakes. Try this one for a change:

    With a human-only system, technically, people call it “Intelligence.” People call it learning. Fundamentally, it’s probabilistic inference. And when you think about that term, probabilistic inference, you think about something that could kill you at any second. Does it sound like a good combination?

  4. gijoel says

    @2

    all in all I’d rather just have the driver that kills the fewest people.

    At the moment that’s humans. AI is nowhere near the point where it can replace people. I’d also point out that computers fuck up all the time. How long has it been since you’ve had to fix a machine by turning it off and on again?

  5. Akira MacKenzie says

    @1 And just who programmed those self-driving cars? The same fallible humans who you don’t think should be driving.

  6. garnetstar says

    I vote for 1) get rid of cars, 2) if we must have them, people should drive.

    At the moment, software in general is nowhere near what people can do, even unskilled and inattentive people. There was a movement for a while to have computers fly commercial planes, no pilot. If you wouldn’t care to be on such a plane, you can see why a computer driving a car isn’t such a good idea either.

    Neal Stephenson, a pretty prescient guy, had this in one of his books too (The Three Californias). All vehicles were computer driven, and the number of fatal crashes didn’t diminish. They called the crashes “something in the silicon”.

  7. birgerjohansson says

    Ah yes, Neal Stephenson. Now, that is a visionary I respect!
    As for self-driving cars, it would have to work something like an anthill, with very simple rules that generate overall order.

  8. nomuse says

    A pity we can’t swap roles; have the humans do the minute-by-minute tasks, with the vigilance task left to a machine.
    Except that’s what we have. And we put it in an Airbus that decided that what the humans were doing was putting the plane in danger…so it took them out of the loop.

    I still remember running a sixty foot boom lift (and, yes, the same control paradigm is still in place in similar machines). To prevent injuries from someone bumping the control and setting the lift into unexpected motion, there was a delay built in to the loop. You had to hold the control for about half a second before the motors would engage.

    On an extremely sensitive, proportional control running a large flexible piece of machinery. The designers had apparently never met any humans. As much as I tried, when I was trying to get that last foot of position I’d nudge the control a little…nudge it a little more…nudge it a little more…the lift would finally engage at full speed and lurch a yard or two.

    All of those anti-this and auto-this features are built on the same assumptions: clean road, sensors all working correctly, full, robust motions, and the human component as the only point of potential failure. Then someone tries to ease into a congested parking lot on a rainy day or steer around the kid that just ran into the middle of the road…

  9. Akira MacKenzie says

    Part of the tech industry ethos is “fake it until you make it.” The idea being that you give a less-than-genuine account of your upcoming product’s development to the public and your investors (especially to your investors) until your programmers and engineers can make it work–all they need is a few million dollars more. Musk operates on the same notion: that the very laws of science can be bent to his will by throwing money at the problem.

    Which is odd, because these same sorts of jerks will angrily tell supporters of increased funding for public education or the welfare state that you “can’t spend your way out of problems.” Go figure.

  10. PaulBC says

    I’m not sure “con artist” is exactly right. A con artist will sell you something that doesn’t work or is completely worthless (deeds to non-existent property or unrecoverable swampland). Musk is selling something that may work well enough to satisfy you but that is also a threat to yourself and others around you. Drug dealers fall in that category. It’s only a con if the drugs are fake. Gun manufacturers also sell a dangerous product, increasingly marketed as a toy or political fashion accessory, but measurably increasing overall societal risk of death by gunshot.

    Musk may also be a con artist in other respects. The “boring” (tunneling) projects seem utterly pointless. I am just wondering what’s the right term for a purveyor of products with no concern at all for the safety of buyers.

  11. snarkrates says

    Akira: “And just who programmed those self-driving cars? The same fallible humans who you don’t think should be driving.”

    Except in the one case, humans are writing code, testing it, rewriting it, validating it, testing it in increasingly real environments, and finally deploying it on a limited basis for further beta tests. In the other, they are making split-second decisions with screaming kids in the back seat and cigarette ash falling on their laps.

    Which do you think will work out better?

  12. stwriley says

    A lot of the hype around self-driving cars has happened because people like Musk can exploit the widespread lack of understanding of probability. We see the same thing in regard to people and Covid risk too. I often use this example to try and drive home to these hopeful innumerates the realities of what things like self-driving cars would have to do: Every day in the world there are approximately 45,000 commercial airline flights. If these flights had a 99.6% success rate, that would mean 180 commercial flights would crash every single day. Most people would be terrified to fly if they saw news of 180 airplane crashes a day. But this is exactly the problem with self-driving vehicles: they will have a failure rate that, while it looks small if you don’t understand probability, actually represents a lot of crashed cars and dead or injured people, given the immense number of cars and miles traveled every day.
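
    The arithmetic behind that example is easy to check; here’s a quick sketch using the figures above:

    ```python
    # Checking the airline example above.
    flights_per_day = 45_000    # approximate daily commercial flights (figure cited above)
    success_rate = 0.996        # i.e., a 0.4% failure rate

    crashes_per_day = flights_per_day * (1 - success_rate)
    print(f"{crashes_per_day:.0f} crashes per day")    # 180 crashes per day
    ```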

  13. says

    If you want to amuse yourself with a technophile you can tell them a wee bitty lie and say that the people programming the auto-driver modules are mostly drawn from the guys who code game AIs. Then show them some footage of AI-driven vehicles in Halo, or AI-piloted ships trying to dock in Elite. They won’t sleep behind the wheel, anyhow, so you’re doing them some favor.

  14. PaulBC says

    On the dream of self-driving cars, this is tangential, but indulge me… (or scroll past!)

    I had a conversation with a co-worker a few years back about Uber and Lyft. He made a good point, though it should have been obvious, so maybe it’s obvious to everyone but me. Our company was paying for Lyft rides from the train to our office in San Francisco (stipulating that employees try to share rides as much as possible). It was about a mile distant, and I always walked, which was enjoyable enough and gave me much needed moderate exercise. I got a few nice photos too. So far I have never used Uber or Lyft. I have not had the need.

    My stance was that Uber and Lyft aren’t really “technology” at all, but just a loophole around taxi regulations. The only “innovation” is replacing competent drivers with amateurs who don’t expect a living wage for doing it, and packaging it as a phone app. His point was that the real enabling technology is GIS. In the past, a cab driver had to know their way around a city, and that was the experience you were paying for. But with GIS, just about anyone can get to their destination following the map. Duh. I had missed the obvious. Basically taxi drivers are obsolete the way travel agents are. (But Uber’s only “value add” is the app, since they didn’t invent the GIS.)

    So anyway (I did have a point) the enduring appeal for self-driving cars has nothing to do with the consumer. Companies have replaced “professionally driven” with “driven by anybody.” Unfortunately “anybody” still wants to be paid, and since they can’t be offshored (thanks to speed of light limitations) there’s a compelling need to replace their pittance with total automation.

    To be clear, I actually favor massive, total automation. However, the economy and culture need to be ready for it first. The capitalist game is to automate when there is still the potential to charge the consumer for skilled labor while using unskilled/automated production. That, in short, is the appeal of self-driving cars. The luxury consumer add-on is just a way to get free beta-testing.

  15. Akira MacKenzie says

    I don’t doubt that Musk’s technophilic desires are sincere. He believes that he can and will do the things he sets out to do. It’s just his failure to deliver, his obnoxious arrogance, and his documented mistreatment of his workers that I object to.

  16. Akira MacKenzie says

    For the Leftist, automation is supposed to liberate us from pointless toil so we can have the time to do what we’d like to do rather than drudge away for someone else. The machines and the products they create are owned by all.

    For the capitalist, automation is just a way to get out of paying people to work for you and to reap more profits to salt away in your tax-free bank accounts. If that means the people you replace starve to death… Meh, so long, losers!

  17. PaulBC says

    I’m not part of the Musk fan base (though I enjoyed Ashlee Vance’s bio). Even Teslas have gone in my mind from cool to annoying (especially when I’m waiting for three of them to go by so I can make a left turn; they’re not exactly a novelty anymore). (Disclosure: I drive a Prius.)

    But I still have to give the Tesla corporation credit for finally moving the needle on electric cars from “golf cart” to “luxury vehicle”. Whether it was Musk, whether it was advances in battery technology, whether it was an all-too-brief public fossil fuel scare, I still think this is a significant start. I waited years for anyone to challenge the dominance of internal combustion. It’s incredible what a scam was pulled with SUVs. You pay a tax for a guzzler if it’s shaped like a sedan, but you can pretend it’s a “truck” if it also blocks the view of drivers behind you in addition to guzzling gas. So basically gas cars are doing worse on average MPG than when I was a teen in the early 80s. My optimism is on the wane, but the fact that electric cars are taken seriously at all and desired by a large segment of consumers is a big deal.

  18. Walter Solomon says

    I was thinking about something similar when I was accompanying my mom to a casino one day. I thought, “why can’t they create a skill-based casino game?” Obviously, over time it will be less profitable (unless it proves to be popular), but even skill-based games come down to probability. Yes, if you’re a skilled player, you have a high chance of winning, but it’s not a perfect certainty that you will win; it’s never a 100% chance.

    This is true for driving as much as anything else. Even when humans take the wheel, probability is still involved. Not to mention human error because of lack of sleep or some other distraction. There’s so much that can go wrong, and I’m reminded of the young trucker who was recently in the news for the long prison sentence he received because he accidentally killed people by making an apparent rookie trucking mistake. With that being said, I doubt self-driving cars are up to snuff yet.

  19. says

    Re: #1 and some of the follow up comments-

    I agree with #1. It’s a question of which is safer – electronic computers or meat computers. And for a whole bunch of reasons, including attentiveness, reaction time, and ability to monitor multiple data streams, electronic computers have the potential to do better once the systems have been developed enough. Airbus already has mandatory autopilot for landing the airplane when conditions get too bad, because the computer is better at it than people. Human pilots only get to do the landings in easy conditions. Granted, aviation is an easier environment because airports are more controlled, and there’s just less to hit up in the air, but it’s an early example where self-driving vehicles are better than people.

    It’s kind of like the old timers who insisted that seat belts did more harm than good, pointing out the very rare and isolated scenarios where it would be better to not have a seat belt, ignoring the far more common scenarios where the seat belt made you safer. The average fatality rate right now in the U.S. is 1.11 deaths per 100 million miles traveled (https://www.iihs.org/topics/fatality-statistics/detail/state-by-state ). That’s the number self-driving cars need to beat before they’re ready for widespread use, not 0.0 deaths per 100 million miles.
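
    For scale, multiplying that baseline rate by annual US mileage recovers the familiar yearly death toll. A rough sketch; the ~3.2 trillion miles/year figure is my own approximation, not from the IIHS page:

    ```python
    # Rough scale check on the 1.11 deaths per 100 million miles baseline.
    deaths_per_100m_miles = 1.11    # IIHS figure cited above
    us_annual_miles = 3.2e12        # assumed approximate US vehicle miles traveled per year

    expected_deaths = deaths_per_100m_miles * (us_annual_miles / 100e6)
    print(f"~{expected_deaths:,.0f} deaths/year at the human-driver baseline")    # ~35,520
    ```

    Any system that beats that rate saves lives, even though it will still kill people.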

    That said, I agree Tesla is going about it the wrong way. I had a business lunch a couple years ago with one of the head automation guys from one of the big car companies. He had a whole bunch of choice words about Tesla’s approach to self-driving cars, the types of accidents they were going to lead to, and how much the blowback would hurt the rest of the industry, which was going about it the right way. It’s even worse if Tesla’s been warned by the NTSB.

  20. Walter Solomon says

    Oh, and Musk’s Las Vegas “hyperloop” looks like a shit tube. Compare that to the recently opened Taihu Tunnel in China and it’s no surprise we seem to be lagging behind the Chinese.

  21. PaulBC says

    Walter Solomon@18

    can’t they create a skill-based casino game

    Isn’t that called the “stock market”?

    Oh wait, “skill-based.” Nvm.

  22. flex says

    As someone who works at an automotive supplier actively working on autonomous driving, maybe I can add some context.

    The SAE defines 5 levels of vehicle autonomy, well 6 if you count the 0.
    0 – No Automation. Full manual control; nothing in the vehicle assists the driver in steering, accelerating, or braking. Very few vehicles are at this level today.
    1 – Driver Assistance. A single system is automated, like cruise control. Most of today’s vehicles are at this level.
    2 – Partial Automation. The vehicle can control some steering and acceleration/braking. There are quite a few vehicles on the market today with these features, including lane detection/correction and imminent collision detection. In the first case the vehicle, if it detects the lane markings, will steer itself to remain within the lane. The second will apply brakes to prevent the vehicle from getting too close to a vehicle in front (reduces the possibility of tailgating). For all its hype, this is where Tesla sits today. There are also vehicles from Ford, Stellantis, GM, BMW, Audi, Mercedes, Geely, etc. with these features.
    3 – Conditional Automation. This is the first level where a vehicle could drive itself, but it still needs human supervision. At this level the vehicle should be able to assess environmental factors and react appropriately, for example changing lanes to pass a slow-moving vehicle. Some vehicles are looking to get here within the next 2-10 years; some OEMs think it will be sooner. Tesla suggests in their marketing that they are already there, but they are careful not to explicitly make the claim.
    4 – High Driving Automation. At this level the vehicle must react appropriately when something goes wrong; before this level, if something unexpected happens or fails, a human is needed to correct the vehicle. At the moment there are various pilot projects using low-speed vehicles (usually less than 30 MPH) in limited ranges, mainly for taxis or delivery services in localized urban areas.
    5 – Full Automation. The vehicle can handle all driving tasks on its own.

    What is interesting to me is that the systems being developed for these autonomous vehicles use machine learning rather than relying on human programmers. I’ve seen the learning setups where the software is literally watching thousands of hours of video of driving on roads in various conditions, in order to learn what conditions could occur. Then there are other videos used to test the software, videos which the software has never seen before, which will have children jumping into a road, or a ball bouncing into the road in front of the vehicle while children follow. The software is supposed to have learned to stop the vehicle in both cases. Of course, even with tens of thousands of hours of machine learning, it can’t cover all situations. There will be times when the sensors just can’t detect what’s going on reliably. Mind you, human eyes have similar problems.

    How far are we from level 5 automation? I can’t really tell you, but I can say that the major OEMs are no longer thinking they will get there anytime soon. Tesla is only at level 2, regardless of their hype, and even getting to level 3 will be a pretty big leap. The level 4 stuff which is going on is interesting, but will probably be limited to low-speed, urban environments. Which means it may save the taxi companies some money by putting drivers out of work. But it may also increase congestion because a company could put more cabs on the streets for less money, or expand into towns with limited taxi service today. Higher speed level 4 autonomy is probably going to be some ways away, simply for safety reasons. But it may arrive faster than I think. There is often a tipping point in technology where improvements start happening faster because so much of the foundation is complete. We are not there yet in autonomous driving capability, and it’s difficult to predict when that will happen.

    When we measure the performance of autonomous driving, or driver assist systems, it should be against current human drivers. Otherwise we are letting the perfect defeat the good. This is where things get a little hairy, but not for the reasons most people think. We can measure the performance of driver assist (and there are systems which will wake up inattentive drivers) against human driver performance. That’s not a problem. Right now, humans have the edge. But I see no reason to think that humans will always be superior to an AI in driving.

    The problem is liability. Right now, if a human driver gets into an accident, we blame the human driver. But when the vehicle drives itself, who is at fault for an accident? The operator/owner of the vehicle? The company which sold the vehicle? The supplier who developed the software? Right now the feeling is that if a vehicle which claims autonomous driving capability (and a lot of Tesla owners believe that claim) has an accident, it’s the fault of the company which built the vehicle. There is going to be a lot of finger-pointing until the legal ramifications are figured out. This is why Tesla blames the operators: to avoid getting blamed themselves.

    I would like to take a step back though, and ask why assigning blame is necessary? If we assign blame to correct a problem, that’s probably a good reason. But in our uberCapitalist society, we assign blame in order to figure out who pays. I.e., who pays for the repairs, who pays for the medical bills, who pays for lost wages, etc.

    Wouldn’t it be a better society if medical bills were covered by universal health care? If government insurance covered lost wages? And if repairs were paid for by private insurance? Accidents will still happen; they will always happen. But our response to an accident shouldn’t be to lawyer up. Our response should be to find the root cause, seek corrective action, and work on prevention.

  23. PaulBC says

    @20

    Oh, and Musk’s Las Vegas “hyperloop” looks like a shit tube.

    Could it be repurposed as a sewage processing system? Like the Romans said, “Pecunia non olet.”

  24. PaulBC says

    Other comments have made the same point, but I think the issue with automation is not the actual probability of human error vs. automation failure but how to assign liability. Obviously self-driving car manufacturers would like it written into law that they are not liable (as the software business has somehow accomplished, which is probably why we just grit our teeth and complain when broken software causes actual damage like lost documents).

    The other issue (again brought up above) is the correlation between probabilities, e.g., a single low-probability failure causing many simultaneous crashes.

    Apart from that, I think humans nearly always overestimate their own reliability and in many cases, automation will improve safety. (That is not, however, the factor motivating the people who are selling this.)

  25. Artor says

    I’ve been driving for 35 years, and only once have I ever collided with another vehicle. (They were at fault.) My father logged well over a million miles on the road, and only once to my knowledge did he ever collide with anything. (He hit a concrete bollard while backing an extra-long van.) I doubt the Tesla autopilots are up to that level of reliability yet.

  26. lanir says

    I sometimes automate tasks on computers, but it’s all done inside their digital environment. Even there, where things are ultimately fairly predictable and you have ways to eliminate fuzzy gray areas, it’s easy to make mistakes while programming complex tasks, and to get unintended behavior in edge cases.

    The real world is analog and that means it’s not always predictable, because we don’t have all the info. There are also immensely more gray areas and edge cases. Computers do math like a pretty smart person, but getting them to make decisions even at the level of a very young child takes a lot of work. I think we’re decades away from anything that could be considered a self-driving vehicle on unmodified roads. It’s probably going to be far less work to build special highways for self-driving vehicles and have people take over on all other roads. Even in America, where we have a LOT of highways, that’s still likely to be the only practical solution until we get some really big advances in software and hardware.

  27. Walter Solomon says

    PaulBC @23

    Could it be repurposed as a sewage processing system? Like the Romans said, “Pecunia non olet.”

    I wouldn’t be surprised if that was one of the selling points Musk made in order for the Las Vegas city powers to agree to the project. Later we’ll find it’ll only receive waste from Tesla toilets.

  28. Jean says

    I’ll be impressed when I see an autonomous car able to drive here in winter, with snow-covered streets, during a winter storm. Mind you, people are also not that great under those circumstances (understatement), but we do manage to have almost normal rush hour traffic (in terms of accidents if not speed).

    Also, one issue is the mix of computers and humans sharing the road. I drive defensively and expect stupidity from other drivers, and I’ve actually avoided accidents that way multiple times. But if the other driver is a computer that does something no human would ever do, it becomes a lot harder to anticipate which type of stupid thing to expect and avoid. And I assume the reverse could also apply.

  29. michaellatiolais says

    There are a lot of us with perfect or almost perfect driving records. And there are a LOT of us without that. I could fill a few pages with stories of reckless driving (DUIs, inattentive driving, etc.) just from people I know.
    Musk sucks, but I am all for phasing in driver assistance, especially in places like 680 in the Bay Area. Enforce lane assist, following distance, and assistance changing lanes, and you would clean up a ton of problems on routes like that. Add in some smart road technology (“lane 4 has an object on the ground at mile 123”) and you could save a lot of lives. I might actually like driving again if I didn’t have to share the road with people who insist that they are fine to drive 90+ half awake while talking on the phone.

  30. says

    @18 This is an aside, but because of Canada’s gambling laws, carnival games have to technically be games of skill. The park still makes money off them by carefully calculating the winnings of skilled players vs. typical winnings and ensuring that the prizes are bought at wholesale prices low enough that they still give away less than they earn.

    Well, that and the games are specifically designed to trick people into playing them wrong. If you see something that looks like a bulls-eye in a carnival game, aiming at it will probably give you the smallest chance of success.

  31. says

    garnetstar@6 After 9/11 I saw someone suggest there should be some way to take over aircraft by remote control if they’re hijacked. The person in question never stopped to consider that terrorists or other baddies could get access to such systems and take over airliners.

    PaulBC@14 I’ve read claims that Uber intends to eventually replace its drivers with self-driving cars. Which is actually kind of dumb, because under the current system Uber’s drivers assume the costs of maintenance and the aging of their vehicles. Of course, last I heard Uber was losing money, so they probably won’t be around to take advantage of self-driving cars even if they appear relatively soon.

    Walter Solomon@18 the game of skill at casinos that offer it is poker.

    lanir@26 I agree that autonomous vehicles on unmodified roads are a long way off. I recently heard a researcher at Canada’s University of Waterloo state that as well. Waterloo actually has a self-driving bus in operation, but it runs on a very limited route. He noted that getting self-driving vehicles to work in conditions like Canadian winters still needs a lot more research. After all, things like road markings become hard to see, or impossible to see at all. It was also suggested in the report that self-driving trucks might actually be used on highways, because there are fewer issues to deal with. But I suspect the attempts at that will grind to a halt the first time one malfunctions and causes an accident with lots of casualties.

    As for modifying roads for self-driving vehicles, a potential problem I see is the various makers not agreeing on standards. Part of the reason digital radio via ground-based transmitters hasn’t taken off in North America is that the regulators were unwilling to pick a single system from the competing ones and announce a date when stations would have to convert to the new format.

  32. says

    it’s no surprise we seem to be lagging behind the Chinese.

    It’s like free markets aren’t as efficient as standardization and planning or something.

  33. numerobis says

    Akira MacKenzie@5: humans engineer systems that work better than humans all the time, so it’s a pretty silly criticism.

  34. unclefrogy says

    Since I have had a lot more time on my hands lately, I started watching dash cam videos on the tube to distract me from more distressing things. It has greatly affected my driving experience, and brought me back to the realization that hurtling around at speed in a very heavy vehicle, with other people likewise situated, is fucking dangerous and just a little bit scary.
    In thinking about self-driving vehicles, why are we just recreating the same experience of individual drivers independently driving?
    If all the machines could communicate their driving status with all other vehicles in their location, thinking of the highway and roads as one big machine instead of a bunch of separate machines running around in the environment, then they could all act in a coordinated manner. In the same way, a railroad is not just a loco and cars; the railroad is a huge coordinated machine made up of locos and cars and track and switches and crossings, with all of the parts operating in a coordinated manner.
    I suspect that there is more going on here than just a simple transportation problem. We have the human emotional component to driving and cars, as well as the economic motivations of the manufacturers and sellers of things like computers, software, cars, and roads.

  35. lotharloo says

    It is very important for the public and the politicians to have some technical knowledge of the technologies involved, because unfortunately there is a lot of misinformation as well as misunderstanding, and also because the public needs to make decisions in terms of which policies to implement. Let me give you some examples of some possible directions.

    We have been using software and programs for a long time, even in highly critical areas where a programming mistake can cause large numbers of deaths (e.g., aviation software). How do those programs work and how do we ensure that they don’t make mistakes? The public (through politicians and regulating agencies) has decided that those systems need to be very safe. There are many ideas within computer science for making a system safe, but the strongest form of assurance comes from areas such as “Formal Verification”, where you model the program/software in question mathematically and then proceed to mathematically prove that the program is error-free. There are also areas of engineering where the goal is to design fault-tolerant systems that can survive a number of failure events (and the concept has been studied in computer science as well). So from a policy point of view, collectively we have decided to make sure that systems are engineered to tolerate faults and the programs involved are well-understood, well-tested, and even mathematically proven to be correct.

    Unfortunately, the “self-driving” cars move in the opposite direction. The major hype right now essentially involves deep reinforcement learning, which produces programs that work for reasons nobody knows and that cannot be explained even by their creators. Notice that this is not how humans work or drive; humans can explain why they reacted in a certain way in an unusual situation, these programs cannot. The best their engineers can say is “well, the weights stored in the matrices made them do it!” So forget formal verification, forget investigating the components of the system to make sure that they are bug-free. We are creating impossible-to-explain and impossible-to-verify software programs and putting them in charge of driving cars. Not only that, but the nature of the AI involved is such that these systems are very sensitive to how they are trained, and they can require a high level of “engineering” (read: fiddling around with parameters and setup) to get them working properly. The systems are obviously being tested a lot, but the philosophical and political question for the public is: do you want tested but unexplainable and unverifiable software programs in charge of cars and trucks on the road or not?
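
    To make the contrast concrete, here’s a toy sketch in Python (illustrative only; real AV software is vastly more complex). A hand-written braking rule has a safety property you can state and check; a learned model’s only “explanation” is its weights:

    ```python
    import numpy as np

    # A hand-written rule: brake when time-to-collision drops below 2 seconds.
    def rule_based_brake(gap_m: float, speed_mps: float) -> bool:
        return gap_m < 2.0 * speed_mps

    # A property we can actually check (brute force here; formal verification
    # tools would prove it symbolically for all inputs):
    for gap in range(0, 200):
        for speed in range(1, 60):
            if gap / speed < 2.0:    # time-to-collision under 2 seconds
                assert rule_based_brake(gap, speed)

    # The learned counterpart: the behavior is buried in trained weights.
    weights = np.random.randn(2)    # stand-in for millions of trained parameters
    def learned_brake(gap_m: float, speed_mps: float) -> bool:
        return float(np.dot(weights, [gap_m, speed_mps])) > 0.0
    # There is no property to check, only "the weights made it do it."
    ```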

  36. PaulBC says

    lotharloo@35

    Notice that this is not how humans work or drive; humans can explain why they reacted in a certain way in an unusual situation, these programs cannot.

    I don’t entirely agree. A human may get as far as “I saw a potential hazard and took action,” but it’s very unlikely they assessed the exact nature of the hazard (was that dark spot a hole, an oil slick, or a piece of a truck tire?) or gave much thought to the precise action they took. A lot of any activity is intuitive and some is “muscle memory”, not literally in muscles but definitely at a level below conscious explanation. Can you explain the exact set of actions you took to parallel park a car? You can try, but mostly, you just practice till you’re good at it.

    I agree, though, that we should have a better understanding of automated systems. That doesn’t mean that we can’t use machine learning, just that it shouldn’t be so “deep” as to have no comprehensible model. Maybe these systems should be studied in simulator environments so that their actions are understood in context.

  37. James Fehlinger says

    I’m not sure “con artist” is exactly right. . . I am just wondering what’s the right term for a purveyor of products with no concern at all for the safety of buyers.

    Well, here’s another candidate label:


    ++++
    Elon Musk doesn’t have Asperger’s, he has sociopathy
    May 10, 2021

    ProgressumTV

    During his SNL opening monologue, Elon Musk confessed he has Asperger’s. Although one should not be in the habit of doubting people who claim to have mental health issues or psychological conditions, in Elon Musk’s case there is little to suggest he suffers from a lack of cognitive empathy, which is one of the main characteristics of people on the autistic spectrum. If anything, his behavior as Twitter provocateur and troll suggests he enjoys provoking reactions from others, while his anti-union and anti-worker attitudes suggest a lack of emotional empathy instead. In conclusion: Elon Musk doesn’t have Asperger’s, he has sociopathy, and his covering up for his asshole attitudes is actually a disservice to the de-stigmatization of psychological and developmental conditions.
    ++++

    YMMV, of course. ;->

  38. tuatara says

    @34 unclefrogy

    I too find hurtling about at high velocity (relative to the natural locomotive abilities of the human body) quite terrifying.

    On the subject of interconnectivity of devices so that they operate as if a single machine: I attended a Microsoft Ignite conference about 7 years ago. One presentation focused on IoT protocols, and in particular the open standards then being developed to facilitate this interconnectivity, with special focus on self-driving cars. The issue then was the speed of communications available, particularly on remote sections of highway with poor cellular coverage. Perhaps modifications to the roads themselves will be part of the solution in facilitating self-driving vehicles.

    There are a lot of new vehicles now on the market implementing (or partially implementing) level 2 vehicle autonomy with features such as lane assist and adaptive cruise control. But the human driver is still in control. It remains to be seen how we progress up the levels of human redundancy. I would like to see individual vehicle ownership made redundant also. Public sharing of vehicles is perhaps a more environmentally appropriate approach, where you have an app with which you book a collection by a driverless car, perhaps one owned by a benevolent agency (or is that too much of an unlikely “Logan’s Run” type fantasy?)

    Slightly related, while road transport is terrifying enough, at least it is only in two dimensions, on the ground. The current developments of flying cars truly send a shiver up my spine!

  39. Peter Bollwerk says

    I own a Tesla and I pretty much agree 100%. I love the car (for the most part), but I don’t care for Elon and his propaganda. I also refuse to pay $10,000 for the privilege of being a beta tester for Full Self Driving.

  40. microraptor says

    And as far as the programming for the computers responsible for driving the cars goes, look at the way companies like EA, Activision, or CD Projekt Red are releasing crappy, bug-ridden messes of games that barely work or in some cases don’t work at all. Tell me, do you really believe that a company that’s run by someone like Elon Musk is going to be better and more thorough at beta testing their software than that? I’ll pass on having a software update to my car that gives it the pathfinding AI of an NPC in Cyberpunk 2077.

  41. notaandomposter says

    @37- you beat me to it; however, I’ll posit it’s possible to be both a sociopath and on that spectrum.
    and he’s a conman
    self-driving cars (as being sold by Tesla) are vaporware
    where is the Tesla Semi that was promised years ago?
    where is the Cybertruck?
    where is the hyperloop?
    he lied at his presentation of solar roof systems
    Starlink is a joke
    he lied about the Boring tunnel (Vegas scam)

    thunderf00t has done well researched debunking of all things Elon- an entertaining watch

    the con is Tesla stock- its value is based on vapor (Tesla cars are not profitable; the only thing that keeps that business afloat is selling carbon offsets). Every other business is a money pit.

  42. Nemo says

    Does it sound like a good combination?

    This is the wrong question. The right question is: Does it work? Not “does it sound good”, not “is it perfect, does no one ever die”; but simply: Per person-mile travelled, which does better — human driver, or AI?

    At least one study says, AI:

    The researchers concluded that the national crash rate of 4.2 accidents per million miles is higher than the crash rate for self-driving cars, which is 3.2 crashes per million miles.

    https://www.fastcompany.com/3055356/the-first-study-of-self-driving-car-crash-rates-suggests-they-are-safer

    Other search results give different answers, so certainly take it with a grain of salt. And it’s early yet. But the idea that AI has at least the potential to be a better driver than the average human (a pretty low bar) should not be lightly dismissed.
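
    Taking the article’s two numbers at face value, the comparison is a one-line ratio (with all the caveats just mentioned):

    ```python
    # Comparing the two crash rates quoted above (crashes per million miles).
    human_rate = 4.2
    self_driving_rate = 3.2

    ratio = self_driving_rate / human_rate
    print(f"Self-driving rate is {ratio:.0%} of the human rate, "
          f"i.e. {1 - ratio:.0%} fewer crashes per mile.")
    # Self-driving rate is 76% of the human rate, i.e. 24% fewer crashes per mile.
    ```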

  43. says

    I’ll pass on having a software update to my car that gives it the pathfinding AI of an NPC in Cyberpunk 2077.

    The way you said it makes it sound kinda fun…

  44. Nemo says

    @Raging Bee #47:

    Well, they haven’t learned how to walk yet, so they might as well drive… :-/

    Who haven’t learned to walk? Robots? Because if that’s what you mean, I wanna say — have you never seen a Boston Dynamics video?


    https://www.youtube.com/watch?v=tF4DML7FIWk

    Yes, this was a hard problem, for a long time. Was.

    Even more striking to me, they (not BD, but… robots) finally, finally achieved something like realistic face movements — this, just introduced at CES:

    (The lip sync is still terrible, though.)

  45. ffakr says

    I personally don’t think we’re anywhere near full autonomous driving.
    I think it could be fairly easy to do.. if we didn’t expect it to drive on today’s roads.

    Part of the problem is getting the computer to recognize the parameters of our current roads. Heck, far too often, I can’t see where the lines are on local roads in clear weather.. let alone in the rain or snow.
    If we had roads designed for autonomous travel, things would be much easier. Through easily readable optical or electronic road markers, autonomous systems would do a much better job.
    Another problem is dealing with humans. They’re always going to be the wild card that’s going to be the most difficult to deal with.

    If we do see fully autonomous driving in my lifetime, I expect it will be only in limited circumstances.. but I’m OK with that.
    I imagine the inside lanes of the highway I usually drive in and out to work on (when I actually drive into work) converted to EV express lanes. With nothing but EVs and lanes built to maximize the vehicle’s ability to track its position, it should be fairly easy to pack cars in, nearly bumper to bumper at speed.
    The main trick here would be controlling access to these lanes. If we can’t keep humans from driving the wrong way down highways.. we’re going to have to mechanically keep them from driving into the autonomous lanes.

    As a human.. I’m totally OK with this limitation. I don’t know that I’d ever get fully comfortable with having my car drive me through a city.. especially when there’s crazy humans driving & biking & walking around me too.
    It would be wonderful to be able to chill during the extended drive in or out to/from work though.

    Oh, one last thing.. it seems like it’d be a huge advantage if we could get auto makers to agree on a protocol for inter-car communications. Seems to me like it’d make automated cooperative sharing of a road much easier and safer if every car could get continuous updates from all other cars around it.
    Why should the 4th car behind me have to visually figure out my vehicle is rapidly slowing, or even out of control because of a catastrophic failure (like a blowout), by watching how the car in front of it reacts (by watching the car in front of it.. by watching the car in front of it).. when that information could be electronically transferred back the entire chain of traffic in a split second?
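
    Here’s a back-of-the-envelope model of that chain, with illustrative numbers I’m assuming rather than measured values:

    ```python
    # Toy model: how long until the Nth car back learns the lead car is braking?
    human_reaction_s = 1.5    # assumed perception-reaction time per driver
    radio_latency_s = 0.05    # assumed latency for one broadcast heard by all cars

    n_cars_back = 4
    visual_chain_delay = n_cars_back * human_reaction_s    # each car watches the car ahead
    broadcast_delay = radio_latency_s                      # every car hears it at once

    print(f"Visual chain: {visual_chain_delay:.1f} s, broadcast: {broadcast_delay:.2f} s")
    # At highway speed (~30 m/s), 6 s of chained reactions is ~180 m of travel.
    ```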

    I’m not super confident that this type of cooperation would happen any time soon. Perhaps, if we had a well-functioning Federal Government.. this is the type of tech for our national infrastructure that could come out of DARPA or some other appropriately capable federal program.

  46. ffakr says

    Oh. and a P.S.
    Yea.. there isn’t much to like about Musk.
    He’s got a definite grifter vibe about him, and it seems fairly likely to me that his seemingly quirky antics have frequently been used to cover up potentially illegal activities.. like securities fraud and insider trading.
    E.g. his recent stunt of posting a Twitter poll that asked his followers whether he should sell a lot of Tesla stock happened to occur just before the public recall of over 100K Tesla cars. It was also close enough to that recall that Musk should have known the recall was coming when the poll was posted. Musk also used the recent stock sell-off to brag about how much capital gains tax he was paying.. an obvious PR ploy to address recent stories about him and other billionaires paying little to no income taxes.

    Also,..
    When his name comes up, I’m frequently reminded of what a baby he was when the guys on Top Gear tried to review the original Tesla sports car. The review couldn’t have gone much worse, with.. from what I recall.. multiple failures on more than one demo vehicle.. and lots of down-time with the presenters waiting around for the batteries to recharge and/or cool down and get out of thermal protection.
    Musk apparently threw an absolute fit. Going from memory again.. but I believe they said he threatened to sue the show. They also made a running joke out of tip-toeing around any mentions of Tesla in future episodes, lest Elon come after them again.

  47. Walter Solomon says

    microraptor @44

    quite a bit of wrecks that are blamed on human error are actually covering dangerous flaws in the way modern cars and roads are being built.

    Not to be pedantic, but wouldn’t that still be “human error”? Obviously, it wouldn’t be an error on the part of the motorist but on the part of the humans who designed the roads and the cars. Perhaps a computer could do all of these things better than people.

  48. flange says

    @44 Nemo
    To my eye, the Boston Dynamics video looks not like robots dancing, but CGI of robots dancing.

  49. chrislawson says

    snarkrates@1–

    “No, that does not mean that in this one area he is making things worse.” Actually, Musk is making things worse in this area.

    Releasing cars with L2 functionality is dangerous (and we know this from reams of research in many fields — L2 automation leads to far more accidents than L0 or L1; it is the safety abyss of automation). If US regulatory bodies hadn’t been corrupted by industry lobbying and pathetic irresponsibility from both major political parties, it would be illegal to build L2 cars for anything other than research and development.

    But what elevates Tesla to the level of sociopathic fuckery is marketing these functions as “Autopilot” and “Full Self-Driving”, terms that ought to be reserved by law for full L5 vehicles.

  50. Ridana says

    self-driving trucks might actually be used on highways

    Whenever the subject of self-driving vehicles comes up, all I can think about is that scene in Logan where the horses escape from the trailer and the autonomous semis are whizzing by. Creeps me right the fuck out.

  51. davidc1 says

    In other news, BMW is testing a self-riding motorbike, I kid you not. What happens when it comes to a stop, I hear someone ask? Well, training-type wheels drop down on either side.

  52. Edward Bosnar says

    Sorry, had to delurk just because this caught my eye in garnetstar’s comment (@6):
    “Neal Stephenson, a pretty prescient guy, had this in one of his books too (The Three Californias).”
    That would be Kim Stanley Robinson, no? His trilogy (The Wild Shore, The Gold Coast, and Pacific Edge) is often referred to as the Three Californias.

  53. tacitus says

    Well I’m hoping for self-driving cars to be a reality by the time I’m ready to give up driving (20-25 years, with luck).

    Around 38,000 Americans die on the roads each year, and in the vast majority of incidents, human error is to blame. Self-driving vehicles do not have to be perfect to save tens of thousands of lives a year. You could still have 10,000 deaths a year caused by faulty software or hardware and still be much safer on the roads.

    But even if the technological challenge is overcome, there’s the psychological challenge of switching to autonomous vehicles which will likely slow down adoption considerably and require a safety standard far in excess of that required for human operators.

    As others have said, autonomous vehicles should be able to talk to each other to pinpoint their relative distance from each other and to communicate immediate road and traffic conditions. Ideally, they should all be able to hook into a citywide grid of sensors that can coordinate vehicle traffic and considerably improve travel times, even with a higher density of autonomous vehicles.

    Incentives could be introduced to encourage the move to autonomous vehicles. Convert HOV and toll lanes to self-driving only, and prioritize traffic signals to allow virtual trains of autonomous vehicles through with the minimum of wait times.

    The transition will always be tricky, but with those incentives and the appeal of a shorter, safer commute spent working (or relaxing) instead of driving, it won’t be as impossible a task as it seems. If you ring-fence the major urban areas, freeways, and other major arterial routes, you’ve solved 90% of the problems without having to tackle the majority of issues that rural travel can throw at you.

    Finally, to repeat: none of this needs to be completely bug-free to greatly enhance road safety and convenience, but likening the risk to the release of a buggy open-world game is a little ridiculous. There are already thousands of examples of critical systems that are safely managed and controlled by software, because the quality control process is much more rigorous when the stakes are that much higher.

  54. flex says

    ffakr #45 wrote,

    … it seems like it’d be a huge advantage if we could get auto makers to agree on a protocol for inter-car communications. Seems to me like it’d make automated cooperative sharing of a road much easier and safer if every car could get continuous updates from all other cars around it.

    You are not alone in thinking that, but the problem is bigger than most people think it is. The problem is not really a protocol issue; there are a number of communication protocols which have become standard across the automotive industry for in-vehicle communication. There are actually several difficulties to overcome, completely unrelated to a communication protocol.

    First, this wouldn’t be a US-only design; vehicles made in the US are sold all over the world, and vehicles made all over the world are sold in the US. The government of each country regulates its RF spectrum usage, so every country which imports vehicles would either have to agree on a common RF band to use, or the manufacturers would need to configure each car to select a specific RF band depending on the country it’s in. Which is not trivial, as using different RF frequencies may require different antennas. The RF spectrum is already pretty much fully allocated, so setting aside a specific band for vehicle-to-vehicle communication would mean closing that band to other uses. This doesn’t mean it can’t happen, but depending on the governments involved it can take a long time. Tire Pressure Monitoring Systems (TPMS) use two frequencies world-wide, 315 MHz and 434 MHz, and getting to only two frequencies took some time and a lot of work. I worked for a few years managing the government approvals from various countries around the world for using TPMS frequencies, and it’s not a trivial task.

    Second, the TPMS systems I worked on used broadcast power in the mW range, with short bursts of data packets once a minute. This was done to conserve battery life and reduce interference with other vehicle systems (and to meet regulatory requirements from the 30+ countries our products were used in). A vehicle-to-vehicle system is likely to need power levels in the hundreds of mW, or possibly even over a watt. This will create side-band noise, which is likely to have knock-on effects on other electronics, including other automotive systems as well as cell phones and in-vehicle entertainment systems. When I first started working on EMC, 20+ years ago, one of the OEM engineers was transporting us to a test location and said, “Let me show you what we are trying to stop.” He tuned the radio to an unused channel and turned on the wipers. Every time the wiper motor turned on, the radio gave a loud buzz. Since then, during development work, I’ve seen radio backlighting flash on and off because of signals from panel dimmers, and the craziest one I’ve seen was a cell phone ring triggering the remote vehicle unlock. All these issues were fixed before the vehicles went into production, but putting a hundred-milliwatt transmitter on every car may cause more problems than you think.

    Then, and probably the most crucial reason, is that putting communications on vehicles gives only a marginal benefit until all vehicles are communicating. There are a lot of older vehicles on the road. If one of the four vehicles in your example is an older vehicle without communication and the first car stops suddenly, you will still need to rely on human recognition of that fact to avoid a crash. The fact that 75% of the vehicles are in communication will not help. This problem could be dealt with too; freeways could be restricted to vehicles which have communication, much like freeways don’t allow tractors or some other farm vehicles today. But that would take political will, and it’s not going to happen until some threshold is met in the number of communicating vehicles already on the road.

    Finally, while vehicle-to-vehicle communication does assist drivers, it is not autonomy. Much of the US has roads where vehicle-to-vehicle communication would be of little or no use because there are too few vehicles. Certainly, in places with congestion, the system could be set up for everyone to travel at, say, 50 MPH and a fixed distance, say 15 feet, apart. But on a road where you are the only car for a mile, a system like that still leaves the driver completely in control.
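
    It’s worth pausing on just how tight that example spacing is; the following is nothing more than unit conversion on the figures above:

    ```python
    # Time gap implied by 15 feet of spacing at 50 MPH.
    MPH_TO_FPS = 5280 / 3600     # 1 mph = ~1.467 ft/s

    speed_fps = 50 * MPH_TO_FPS  # ~73.3 ft/s
    gap_s = 15 / speed_fps       # ~0.20 s

    print(f"{gap_s:.2f} s time gap")
    # Commonly cited human reaction times are on the order of 1-1.5 s,
    # so a 15-foot gap at 50 MPH only works if machines, not people,
    # are doing the braking.
    ```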

    Please understand that I’m not saying your idea is bad, only that the automotive companies have looked at it and concluded that improving vehicle autonomy offers better results for less investment than trying to create an interconnected system which will work only in limited areas.

  55. flex says

    chrislawson @55 wrote,

    Releasing cars with L2 functionality is dangerous …. L2 automation leads to far more accidents than L0 or L1; it is the safety abyss of automation.

    I agree, with one caveat: L3 is going to be the dangerous point. We have had plenty of cars on the road with L2 automation (lane detection, braking assist) for a number of years now, and we have not seen an increase in accidents. I feel that’s because L2 still requires drivers to pay attention to the road. L3 is the point where drivers will start thinking their car can manage itself in most situations, and we are going to learn a great deal about human behavior at that point.
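
    For readers keeping track of what the level numbers refer to, here is a rough paraphrase of the SAE J3016 automation levels (the one-line summaries are mine, not the standard’s wording):

    ```python
    from enum import IntEnum

    class SAELevel(IntEnum):
        """Rough paraphrase of the SAE J3016 driving-automation levels."""
        L0 = 0  # No automation: warnings at most; the human does everything.
        L1 = 1  # Driver assistance: steering OR speed control, not both.
        L2 = 2  # Partial automation: steering AND speed; human must supervise.
        L3 = 3  # Conditional: system drives in its domain; human takes over on request.
        L4 = 4  # High: no human fallback needed within its operating domain.
        L5 = 5  # Full automation: all roads, all conditions, no driver role at all.
    ```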

    BTW, does anyone else remember how the introduction of cruise control was seen as decreasing the safety of driving? We’ve adapted to that, and cruise control has also changed a bit over the years to become somewhat safer. I don’t view lane assist or braking assist much differently than cruise control. Even automatic parallel parking seems pretty benign. It’s at L3, when vehicles start doing things like recognizing and reacting to stoplights and stop signs on their own, that I think we’ll see some very interesting problems crop up (and we all know that human beings never run stoplights or ignore stop signs).

  56. tacitus says

    Certainly, in places with congestion, the system could be set up for everyone to travel at, say, 50 MPH and a fixed distance, say 15 feet, apart. But on a road where you are the only car for a mile, a system like that still leaves the driver completely in control.

    That’s why I think it’s not just the cars that need to be communicating with each other — the city grid has to be part of the solution too. It can even provide routing information and modify the timing of traffic signals to speed the flow of autonomous traffic.
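
    As a hedged sketch of what “the grid talks to the cars” might carry, here is one illustrative message shape. It is loosely inspired by the signal phase and timing idea in the SAE J2735 message set, but every field name below is invented:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SignalPhaseBroadcast:
        """Illustrative V2I message from a traffic signal to nearby vehicles.

        Field names are invented; a real deployment would follow a published
        message set (e.g., signal phase and timing in SAE J2735).
        """
        intersection_id: int
        current_phase: str        # e.g. "green_northbound"
        seconds_to_change: float  # time until the current phase ends
        advised_speed_mps: float  # speed that arrives when the light is green

    def should_coast(msg: SignalPhaseBroadcast, eta_s: float) -> bool:
        # If we'd arrive after the light changes, slow to the advised speed
        # instead of racing to a red -- the "speed the flow" idea above.
        return eta_s > msg.seconds_to_change
    ```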

    Yes, it’s all hideously complex, and it will likely take a lot of political will to establish the required standards and frameworks. But if it’s going to happen and the US doesn’t do it first, another country will, and it will be left to the US to figure out how to catch up.

  57. PaulBC says

    I want to say one more thing about Elon Musk. I don’t hate the guy, and I think he actually did some good as an entrepreneur at one point, specifically by getting consumers to take electric cars seriously. You can debate his contribution all you like, and whether it might have happened anyway with cheaper batteries or interest from big automakers, but it’s a step forward and it was a long time coming.

    What annoys me about Musk is less the predatory capitalism, which kind of goes with the turf, than that he thinks he’s a lot funnier and more interesting than he is. PZ’s photo illustrates this. He just gives stupid interviews, tweets a lot, and makes an ass of himself all the time.

    His “$420 per share” tweet, for instance: “420.” Heh heh. You’re cool, Elon. As soon as it happened, it was obvious he was making a 4:20 joke. But the business media couldn’t report it that way, because equity prices are serious, and executives talking about them publicly is serious. There are jobs at stake. There are retirement funds at stake. It’s about as funny as pulling a fire alarm. What an asshole! Just because you’re a billionaire doesn’t mean you get to shit on everyone else. And it doesn’t make you witty or colorful. It makes you a public menace. He still doesn’t get it, though.

    Look, I’m not a billionaire, and I don’t think any amount of luck could have turned me into one (I’m doing fine, thanks). So Elon Musk is good at something: making lots of money. It doesn’t make him better than other people. It doesn’t make him funny or interesting. He can launch his fucking rockets. I just wish he’d shut the fuck up.

  58. answersingenitals says

    The government’s critical role in the advancement of autonomous cars is to develop nationwide (and, in cooperation with other countries, worldwide) standards for the design and operation of such vehicles and roads. But in August 2017, Trump disbanded the US government’s Self-Driving Car Council! As I recall, when asked why he did this, Trump answered, “Why doesn’t everyone just get a chauffeur?”

  59. Shawn Smith says

    Walter Solomon@20:

    Well, video poker, live blackjack, live poker (against other players), and sports betting can definitely be classified as skill-based. I’ve even seen paytables for video poker that give a positive return (usually one of the deuces wild variations, and even then never better than about 101.2%, with a fairly high standard deviation).
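
    For anyone wondering what a positive-return paytable means mechanically: the return is just the probability-weighted payout under optimal play. The paytable below is invented to show the calculation; it is not any real machine’s:

    ```python
    # Expected return of a hypothetical paytable: sum over hands of
    # (probability of the hand under optimal play) * (payout per unit bet).
    # The probabilities and payouts here are invented for illustration.
    paytable = [
        # (hand, probability, payout multiple)
        ("natural royal",  0.000022, 800),
        ("four deuces",    0.000204, 200),
        ("wild royal",     0.00180,   25),
        ("five of a kind", 0.00320,   15),
        ("lesser hands",   0.43,       2),  # lumped together for the sketch
    ]

    expected_return = sum(p * pay for _, p, pay in paytable)
    print(f"{expected_return:.1%}")  # 101.1% -- above 100% means positive expectation
    ```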

    Also, just to be pedantic myself, it would have been the Clark County commission, not the city of Las Vegas, that had anything to do with Musk’s so-called hyperloop. The city of Las Vegas does not extend south of Sahara Ave. at any point. You were probably aiming for comprehension by the general public, and that’s fine. No argument from me.

  60. ffakr says

    @60 Flex,
    I’m not even thinking about the electronic transport, but more fundamentally just agreeing on a schema for organizing the driver data, figuring out how to package it, and how to respond to requests… do we broadcast? It’s clearly doable (look at the universal OBD-II test port on cars), but everyone would have to get together to do it, or it would have to be mandated.
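
    To make “agreeing on a schema” concrete, here is a minimal sketch of the kind of record every manufacturer would have to standardize. The shape loosely resembles the basic safety message concept from the DSRC world, but the field names and packaging below are assumptions:

    ```python
    from dataclasses import dataclass
    import json, time

    @dataclass
    class VehicleStateMessage:
        """Illustrative V2V broadcast record -- the kind of schema everyone
        would have to agree on. Field names are invented; the concept
        resembles the basic safety message from the DSRC/J2735 world."""
        vehicle_id: str      # privacy is a whole separate fight
        timestamp: float
        lat: float
        lon: float
        speed_mps: float
        heading_deg: float
        brake_applied: bool

        def to_packet(self) -> bytes:
            # One possible packaging: JSON is easy to agree on but verbose;
            # a real standard would likely pick a compact binary encoding.
            return json.dumps(self.__dict__).encode()

    msg = VehicleStateMessage("veh-0001", time.time(),
                              42.33, -83.05, 29.1, 187.0, False)
    packet = msg.to_packet()  # broadcast something like this a few times a second
    ```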

    I’m not sure that a US-only version would be such a big deal, though.
    US and other nations’ automakers have been navigating different requirements in different countries since the invention of the automobile. Right-hand vs. left-hand driving is the most obvious example, but safety requirements and local emissions requirements also come to mind. Heck, these things aren’t even universal across the US, with California having its own standards for some of them. I’d expect manufacturers would find commonalities where they could, encourage collaboration where they could, and roll out regional versions where they had to.

    In practice, I expect we’d run into situations where early adopters with large auto markets would set the standards, and smaller markets would face a choice: adopt what another nation was doing, or go without autonomous cars with as much functionality as the major markets get.
    This would probably work out much like how former British colonies ended up driving on the same side of the road as Britain.

  61. Nemo says

    @flange #54:

    To my eye, the Boston Dynamics video looks not like robots dancing, but CGI of robots dancing.

    BTW, I posted two BD videos, but the blog only embedded the first one — the second one is there as a link; it’s the Parkour video, which if anything is more impressive than the dancing.

    If you’re still determined to disbelieve this video, there’s not much I can say, except: Spot, the yellow “dog” robot, is available now as an (expensive!) commercial product. So there are lots of third-party videos of it, even if you don’t trust BD. Spot’s not quite as impressive as Atlas (the humanoid robot in the BD videos), but I defy anyone to watch Spot move and resist the impression that it’s a living thing.

    The Parkour video shows Atlas blowing past “humanlike” and straight on to “more agile than most humans,” which is kind of inevitable with these things. If I were the sort of person to be scared of robots, it would be nightmare fuel. Then again, I’ve felt that way since I first saw a BD robot galloping (!), in 2013: