How safe are self-driving cars?


I for one would really like to see self-driving cars become an everyday reality, as common as ordinary cars are now. It may surprise people that many such cars are already in use as taxis in several cities. But there are key questions concerning safety, and one would hope that the companies marketing these cars would be transparent about their ability to detect pedestrians and obstacles. Sam Biddle writes, however, that one major company is putting its cars out on the streets even though they seem to have two key vulnerabilities: an inability to see small children and an inability to detect large holes in the ground.

In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors, and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.

AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.

The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. Following a string of highly public malfunctions and accidents, the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.

Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known internally about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and have so much trouble recognizing children in certain scenarios that they risked hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.

This is just one company. There are plenty of other companies out there operating autonomous vehicles with varying degrees of success. The top ones include Waymo (Google), Cruise, Zoox (Amazon), Argo (Ford and Volkswagen), Aurora (which acquired Uber’s self-driving unit), Motional (Hyundai), and Pony.ai.

When it comes to safety, the key question is: safer than what? Accidents involving autonomous vehicles get a lot of publicity, especially if they involve fatalities. The promoters of autonomous vehicles claim that the standard should not be zero accidents but that the cars should be safer on average than human drivers.

We all have had experiences with bad drivers, even if we persuade ourselves that we are very good drivers. People who drive recklessly, exceed speed limits and break other rules, and even drive drunk are sadly common. Autonomous vehicles will do none of those things, so in principle they should be safer. But human drivers seem to be able to detect some dangers better than the autonomous vehicles do.

I must admit that even though I would like to see autonomous vehicles become ever more viable as a transportation option, I have never been on one that traverses city streets and suspect that I would feel apprehensive about doing so in the near future. But then, I am a particularly risk averse person and thus am not the best person to usher in brave new worlds such as these. I am a follower of such things, not an early adopter.

Comments

  1. says

    I imagine that your reaction is akin to that of many people when they first encountered elevators: It was new technology, and if it malfunctioned, it could kill you. It is easy to forget that you can die or be seriously injured if you fall down a flight of stairs. So yes, the question is, are elevators safer than stairs? In many cases, they certainly are more convenient (if I need to move one or two floors and am unencumbered, I’ll first see if there are stairs nearby, and if so, I’ll take them, as that seems to be quicker).

    I look forward to when autonomous cars are safer than human-driven cars because it will save me a certain amount of hassle while also reducing my aggravation with people who refuse to use their turn signals, turn right on red without stopping, blow through 4-way stops because “the other guy has to stop anyway”, etc.

  2. garnetstar says

    I also think that self-driving cars could be a thing. They’d be really good for the elderly, for example, who can no longer drive themselves. But, at this point, I don’t think that they’re at all ready for the consumer market, nor may they ever be.

    Because, there is no software that doesn’t have bugs, that functions properly *every single time*. Goodness knows, ever since computers appeared, I have never encountered any perfect software. And, in a car, you might not have time to “turn it off and turn it back on”, a ubiquitous requirement with most other computers.

    Also, as yet, software can only do and respond to what the programmers have told it to do and respond to. In other words, what the programmers thought of. In a novel dangerous situation, the car won’t be able to think of and choose an appropriate new response, because it can’t think. However careless humans may be, they can always adapt their responses to a new situation.

    So, it might be a long while before self-driving cars are good enough to have fewer accidents than human-driven ones. We’d have to get equal numbers of them out on the road to see. Perhaps advanced AI could think fast enough to come up with an instantaneous, good, appropriate response to a completely new situation, but there’s still the software-bugs problem.

    Think about it: would you travel in a self-flying airplane, with no pilot on board? Then why would you get into a self-driving car?

  3. Ridana says

    When I think of self-driving cars, this scene from Logan with the horses is what I think of and it still creeps me out. 🙂

  4. Trickster Goddess says

    garnetstar:

    We already almost have self-flying planes, with autopilot flying and assisted take-offs and landings. And there are far fewer obstacles to avoid in the sky than on the ground.

    I am a big fan of self-driving trains. I ride them almost every day. They are much more efficient at moving large numbers of people around than streets clogged up with self-driving cars.

  5. birgerjohansson says

    A lot of unexpected things happen in streets. Not so much with rail.
    Even less so high up in the air, although bird strikes and clear-air turbulence are real problems. And if things go wrong, expect high mortality numbers.

  6. garnetstar says

    There has already been at least one plane accident from a software bug, even though there were pilots on board.

    It was a while ago, a USAir jet landing at Pittsburgh. They were landing and were about 200 feet above the ground. Some software glitch snapped one of the flaps (or the things on the tail, or something on the outside of the plane that moves back and forth) into the wrong position for landing. So, the plane dropped to the ground and everyone on board was killed.

    I’m sure that, most of the time, pilots could use the computer to fly the plane for nearly the whole flight. But, *anything* unexpected, any software glitch, and you’ll need pilots. Pretty sure that no computer would have made the decision to land in the Hudson, and then been able to do it.

    And, as I say, without pilots, who’s going to turn the computer off and turn it back on?

  7. says

    Adding to what Trickster Goddess said @5, I am a software engineer at Collins Aerospace (for one more month, anyway) who works on making planes more self-flying. Their point about there being “far fewer obstacles to avoid in the sky than on the ground” is one I bring up as to why self-driving cars won’t be safe anytime soon! A self-flying plane doesn’t have to worry about hitting pedestrians, for example. I would add that the rules of the sky are more strictly enforced as well as better defined. Plus, human air traffic controllers are still very important to the entire process. This makes programming airplanes much simpler than programming cars. And yet…programming airplanes to be self-flying is very difficult! (Granted, one big factor in this is all the mechanics of an airplane. That part is much more complex than that of cars.) I have been working there for almost 18 years and the advances we have made in our technology have been very minimal. Much of my time at the company, without going into too much detail, has been spent fixing issues in the software or the testing of the software. Little has gone into new development.
    That said, one reason for this is that we have to thoroughly test our software per regulations. This is, in my opinion, a very good thing. But it is also a point that frightens me when it comes to cars. There are not the regulations for cars that there are for aircraft. Thorough testing is key for eliminating bugs in software and, without regulations forcing the companies developing self-driving cars to be thorough with their testing, I simply do not trust them to do an adequate job.

    And this brings me to the article that John Morales posted @4. There is a key point in that article that should not be overlooked: “And because California law requires self-driving companies to report every *significant* crash, we know a lot about how they’ve performed.” The emphasis there is mine. As I understand California law (I am an engineer, not a legal expert!), that is an accurate summary of the law. But…what does “significant” mean? And another problem: they only need to report such crashes. Not malfunctions that do not lead to a crash! Those (again, as I understand the law) do not need to be reported! I have heard, for example, about the Cruise cars getting confused about what to do and simply stopping in the middle of the road. And then sitting there. In the middle of the road. For many minutes. Such events do not need to be reported so long as no crash happened! But is this safe? No! It is not! I have to note the author states that “other vehicles ran into Waymos 28 times.” But the author appears to assume this means the drivers of those other vehicles were at fault, when maybe some of these Waymos stopped suddenly in front of them for no clear reason. Yes, such accidents might still legally be the fault of those drivers for tailgating the Waymos…but my point is that maybe some of these would not have happened at all if it were not for malfunctions with the Waymos. The author does not appear to fully question this. (Though, I have to acknowledge I am speculating a bit here based on what I have heard about these vehicles. I have not seen the crash reports the author claims to have, and so I do not know what level of detail the reports go into. Maybe these reports do go into enough detail to clear the Waymos of fault. But I am skeptical. And, as I read, the author says further down that “perhaps all of these crashes…were the fault of the other drivers,” which would seem to indicate that these crash reports do not try to assess fault. If true, that seems a critical gap in the reports.)
    One more potential problem with the Waymo data is that the author says Waymo started out in Phoenix. That’s in Arizona, not California. So the author looked at crash reports in California, but, when they were figuring out the crash rate, did they only count miles driven in California, or miles driven in all locations? Their writing suggests they may have made exactly that blunder.
    There’s another problem in the article: “These were overwhelmingly low-speed collisions that did not pose a serious safety risk.” Sure…but, as far as I am aware, these cars are not out on the freeways that much. I understand they’re being tested mostly at low speeds. So it is incorrect to compare the crash reports of these cars to those of human drivers when the cars are not necessarily being used in a manner reflective of the average person’s driving habits.
    As I continued to read through the article, the author ends up acknowledging as much: “Both Waymo and Cruise have their driverless cars avoid freeways.” They also acknowledge that what this really means is “that there’s a lot of uncertainty about these figures.” Yeah, no crap!
    Finally, the author does acknowledge a problem that should have been at the very top of their analysis: “On the other hand, a small minority of drivers—including teenagers, elderly people, and drunk drivers—account for a disproportionate share of crashes. An alert and experienced driver gets into crashes at a rate well below the national average.”
    There are better solutions to these problems than self-driving cars! Drunk drivers need to call a cab! Elderly people probably need to live within walking distance of the places they need to go. And in all of these cases, better public transportation would likely do wonders!
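
    To make the denominator concern concrete, here is a minimal Python sketch of how that mismatch would skew the numbers. The mileage figures are entirely hypothetical; only the 28-crash count comes from the article.

        # Minimal sketch (hypothetical mileage figures, not real Waymo/Cruise data) of the
        # denominator mismatch described above: crashes counted only from California
        # reports, but miles counted across every state the fleet operates in.

        ca_reported_crashes = 28        # crashes appearing in California reports (from the article)
        ca_miles = 2_000_000            # driverless miles driven in California (hypothetical)
        all_miles = 5_000_000           # driverless miles driven everywhere, incl. Arizona (hypothetical)

        rate_correct = ca_reported_crashes / ca_miles * 1_000_000     # crashes per million CA miles
        rate_blundered = ca_reported_crashes / all_miles * 1_000_000  # same crashes over all miles

        print(f"Matching denominator: {rate_correct:.1f} crashes per million miles")
        print(f"Mismatched denominator: {rate_blundered:.1f} crashes per million miles")
        # The mismatched version understates the California rate by the factor all_miles / ca_miles.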

  8. Marja Erwin says

    “aggravation with people who refuse to use their turn signals”

    To me, it seems uncommonly decent to refuse to fire them. I’ve sometimes found myself in the stroad I was waiting to cross after being hit by flashing lights from multiple directions. You don’t know who might be sensitive to these things, or how sensitive.

  9. John Morales says

    Leo, I appreciate that you looked at the article and shared your thoughts about it.

    One conceptual distinction I think matters: safety is one metric, as is the total number of accidents. But they are not the same metric.

    Right? Accidents/1M Km is not the same as Injuries/1M Km.

    But yes, good point regarding the reporting, in particular about the times it might have been bad but, by fortunate circumstance, wasn’t.

    Also, apples and oranges, another good point. Horses for courses — and freeways are fraught.

  10. sonofrojblake says

    California law requires self-driving companies to report every significant crash (plus discussion following as to what “significant” means)

    I work in the chemical industry. For over 20 years I’ve worked on sites featuring major accident hazards. It’s a basic principle that for every fatal accident there are a small number of serious injuries, a larger number of less serious injuries, and a MUCH larger number of near miss incidents where no injury occurred, but could have.

    UK law mandates the reporting of significant injuries. At least here, “significant” has a VERY specific legal definition which can be found in the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations 2013.

    Any responsible site monitors its safety closely, and reports those it must… but also, crucially, records and investigates safety incidents that fall below that threshold. This doesn’t just mean injuries -- it means ANYTHING, and it means “near misses” too. Example, from my own experience a couple of years ago: an incident occurred on a site where I worked that involved someone inadvertently dropping a two-foot long piece of threadbar, which fell something like 15 metres to the ground. Nobody was hurt -- but there was a thorough root-cause-analysis investigation anyway, and actions were taken to prevent it ever happening again (or at least to significantly reduce the likelihood).

    If self-driving-car companies are not actively and thoroughly recording and investigating EVERY near miss, and taking steps to prevent recurrence before they reintroduce the vehicle to the road, then they’re negligent, in my opinion.

    My impression is that for reasons of profit, demonstrably relatively unreliable vehicles have been released onto public roads not in ones or twos, but FLEETS. I’d expect to read in these stories something along the lines of “the SDC fleet was immediately demobilised until the problem was identified, fixed, and the fix was rolled out to the fleet”. I’ve never read anything like that.

  11. John Morales says

    sonofrojblake, apparently bamboozled.

    My impression is that for reasons of profit, demonstrably relatively unreliable vehicles have been released onto public roads not in ones or twos, but FLEETS.

    Heh heh heh.

    Oh, right. Making money by selling what people don’t want.

    I’d expect to read in these stories something along the lines of “the SDC fleet was immediately demobilised until the problem was identified, fixed, and the fix was rolled out to the fleet”. I’ve never read anything like that.

    Obvious reason: no such problem.

  12. Holms says

    #5 trickster
    Aircraft are not ‘almost self-flying’. What we have are aircraft with lots of automation of technical processes, with two highly trained humans monitoring the instruments and making decisions at all times.

    ___

    #9 Marja

    To me, it seems uncommonly decent to refuse to fire them. I’ve sometimes found myself in the stroad I was waiting to cross after being hit by flashing lights from multiple directions. You don’t know who might be sensitive to these things, or how sensitive.

    Could you clarify? It looks as if you are suggesting the use of blinkers ought to be reduced, just in case someone has a sensitivity to blinking lights. I sincerely hope that is not what you mean.

  13. flex says

    I’ve worked in the automotive industry for almost 30 years, originally with interior devices, then electro-magnetic compatibility (EMC) testing, then radio-frequency systems, and now on braking modules. I have tested, for reliability and EMC, early models of vision-based driver assist systems which are the backbone of today’s autonomous driving initiatives. I have not worked directly on the designs themselves, but because I spent some time testing them I have some insight into how they work.

    First, the automotive safety field is highly regulated, and the major players in this area, i.e. long-time makers of automobiles and automotive suppliers are very cognizant of safety and safety requirements. I don’t know about the newer entrants, like Google and Tesla. I don’t think it has sunk into them yet that senior engineers have been jailed for ignoring automotive safety regulations. They will probably learn it soon enough.

    A couple of comments from above are probably worth addressing. First, the investigation of near misses. To confirm what was mentioned above, to my knowledge that doesn’t occur. The primary reason is probably that, until the last few years, the technology to record a near-miss was not installed on cars. It still generally isn’t. It’s difficult to investigate a near miss if there isn’t any data; at best there is a correlation between a fault recorded by a vehicle and a report from an outside agency, like a rider in the vehicle. If the operators only get a notice that a near-miss happened, and nothing in the vehicle logs showed any near-miss condition, it is hard to determine what went wrong. That does not mean that the report of a near-miss is fraudulent; it is certainly possible that the vehicle sensors missed something, but it’s hard to correct a concern if the data isn’t there.

    A comment was made above about programming autonomous vehicles. The programming is not done like traditional programming. The programmer does not have to identify and account for every possible road condition. The programming is performed through machine learning, where the system is exposed to thousands of hours of videos of driving along roads, and the system is expected to learn to recognize roads, lanes, other vehicles, etc. I’ve watched some of the machine learning occur, and it’s pretty fascinating. But it’s pretty crucial to remember that these systems do not think about anything; they don’t see a telephone pole and think, “That is a telephone pole”. They recognize a series of pixels in a specific pattern and have learned that those patterns are not part of the area toward which they should be directing the vehicle. The same goes for mailboxes, curbs, other vehicles, and pedestrians. Most of the time, when I read a report of an autonomous vehicle failing, I can identify what part of the environment the autonomous system couldn’t recognize. White semis are notorious because they can appear to be just white space and not be recognized by the autonomous system at all.

    Note, this process is very similar to the techniques used to generate the AI images which hit the news earlier this year. The machine learning is very much the same; the difference is that the learned model is used to navigate a road instead of simulating the work of an artist. That is, while how the machine-learned data is used differs, the learning process is pretty similar. One major difference is that if an AI image generator screws up the number of fingers, it doesn’t put anyone’s life in danger.

    That leads us back to the topic of safety. Maybe it is reasonable to make a comparison between AI images and autonomous driving. When I read about autonomous driving, and the number of road miles and the number of issues which crop up, it seems to me like the two machine-learning processes are about on par for accuracy. With certain prompts, i.e. under certain conditions, AI image generation creates some amazing artwork which would be very difficult to distinguish from that of a master artist. Similarly, under certain driving conditions the autonomous vehicles perform like expert drivers. However, with different prompts, and different driving conditions, the limitations of both of these machine-learned processes become painfully obvious. Again, the results of these limitations differ: one leads to incredulous laughter, the other might lead to deaths. To put this discussion into FMEA terms, the occurrence and detection levels may be the same, but the severity would be very different. The main effort in both systems these days is to reduce occurrence.
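
    For anyone unfamiliar with the FMEA jargon: the conventional ranking is a Risk Priority Number, severity times occurrence times detection, each rated on a 1-10 scale. Here is a minimal sketch with made-up ratings, just to illustrate the point that equal occurrence and detection but very different severity gives a very different priority:

        # Minimal sketch of the FMEA framing above. RPN = severity x occurrence x detection,
        # each conventionally rated 1-10. The ratings below are hypothetical illustrations.

        def rpn(severity: int, occurrence: int, detection: int) -> int:
            """Risk Priority Number: higher means a higher-priority failure mode."""
            return severity * occurrence * detection

        # Same occurrence and detection ratings, very different severity:
        image_generator_glitch = rpn(severity=2, occurrence=4, detection=5)    # wrong number of fingers
        autonomous_driving_miss = rpn(severity=10, occurrence=4, detection=5)  # missed pedestrian

        print(image_generator_glitch)   # 40
        print(autonomous_driving_miss)  # 200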

    Are autonomous driving systems safer than human drivers? In many situations, maybe even most situations these days, they are. But not in all situations. Both autonomous vehicles and human-operated vehicles do things like get confused, stop in the middle of roads, make stupid decisions, etc. As mentioned above, inexperienced drivers and impaired drivers account for a disproportionate share of accidents and near-misses. I submit that autonomous vehicles are probably better at driving than those sets of drivers. But the autonomous systems are not better than most drivers on the road today. Not yet.

    As a final note, when I first started hearing about the development of autonomous driving systems over twenty years ago, the focus was on developing systems which would operate in conditions with fewer variables. I.e., the development was focused on learning highway-driving conditions. The reason was that driving on a highway is already a fairly well-controlled environment: all the cars are expected to travel in the same direction, lanes are well defined, and pedestrians, cyclists, and tractors are very rare (technically forbidden). The hardest part of the problem was dealing with merging traffic. The idea was that human drivers would still be needed until getting on the highway; then the autopilot could take over, with a warning system as the pre-defined exit approached.

    I’ve been amazed that the focus has shifted to low-speed vehicles in city driving. I can only assume that someone decided that people are more likely to live through a 35 MPH collision than a 70 MPH one. That’s absolutely true, but the set of conditions the vehicle needs to detect and account for is vastly larger. When they perfect urban driving, highway driving should be a piece of cake in comparison.

  14. sonofrojblake says

    it’s hard to correct a concern if the data isn’t there

    This is not an acceptable response. “We don’t measure it, so we can’t be expected to correct it” is the sort of thing that could get you jailed if you said it in the context of almost any other industrial accident scenario. Note it’s a “don’t”, not a “can’t” -- the sensors missed it? Then you need more sensors -- ground your fleet until they’re all installed and tested. The sensors caught it but the software didn’t act on it? Then the software needs updating -- ground your fleet until it’s installed and tested. It’s clear from the continued reports of accidents, even fatal accidents, that whatever layers-of-protection analysis has been carried out on these systems hasn’t identified the gaps, and there simply aren’t enough layers in place to reduce the risk to a tolerable level.

    Note: if I install something on a chemical site, there’s a tolerable level of risk that I have to achieve if there’s the possibility of affecting the health and safety of someone on the site. If there’s a possibility of affecting someone OFF site, then the tolerable risk level is an order of magnitude lower. Driverless cars are by definition an entirely “offsite” risk -- the tolerable risk levels absolutely should be incredibly low.
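
    For context, the arithmetic behind that kind of target is a layers-of-protection analysis: the initiating event frequency is multiplied by each independent protection layer’s probability of failure on demand, and the result is compared against a tolerable frequency that is set an order of magnitude stricter for off-site harm. A minimal sketch with entirely hypothetical numbers:

        # Minimal LOPA-style sketch with hypothetical numbers. Each independent protection
        # layer multiplies the initiating event frequency by its probability of failure on
        # demand (PFD); the mitigated frequency is compared against a tolerable target,
        # which is an order of magnitude stricter when the harm could fall on people off site.

        initiating_frequency = 1e-1      # initiating events per year (hypothetical)
        layer_pfds = [1e-1, 1e-2]        # PFDs of the independent protection layers (hypothetical)

        mitigated_frequency = initiating_frequency
        for pfd in layer_pfds:
            mitigated_frequency *= pfd   # each layer knocks the frequency down further

        tolerable_offsite = 1e-5         # tolerable events per year for off-site harm (hypothetical)

        print(f"mitigated frequency: {mitigated_frequency:.0e} per year")
        print("meets the off-site target" if mitigated_frequency <= tolerable_offsite
              else "needs another protection layer")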

    I also don’t really agree that just being safer than a human driver, even an impaired human driver, gets the risk tolerably low. If you’re hit and hurt by a driver who is impaired -- e.g. drunk -- then there is legal recourse against that driver, who is definitely responsible, personally. If you’re hit and hurt by a driverless car -- who’s responsible? Certainly not one person. For that reason, just being better than a drunk doesn’t cut it, in my opinion. You’ve got to be at least as good as me at my sober and undistracted best before you should be allowed on the road. And self-drivers demonstrably aren’t, yet.

    Don’t get me wrong -- I want one. I thought I’d have one by now. I had a conversation with my wife in 2014 and predicted that by the time our kids were old enough to learn to drive they wouldn’t have to bother, and by the time THEIR kids were old enough, it would be illegal for an unaided human to operate a car on public roads. Turns out driving is HARD, and I no longer believe my predictions. Who knew?

  15. says

    Even if self-driving vehicles turn out to be safer than human drivers, is the improvement really enough to justify cutting yet another large number of humans out of decent jobs (in this case, cabbies, chauffeurs, truck drivers, etc.), and closing off yet another opportunity for those humans to pay their own way? Will yet another surge of automation really make our society and overall standard of living better? I really don’t think it will, and I really don’t think the loss of yet another chunk of decent jobs is a fair price to pay for what will probably turn out to be a rather small increase in overall road safety.

    I don’t think we should ban ALL automation, but a ban on self-driving vehicles might be a good way to keep at least that class of decent jobs for humans, and thus keep at least a few more of us from being stiffed out of the money economy.

  16. says

    If you’re hit and hurt by a driverless car — who’s responsible?

    EVERYONE BUT THE TECH-BRO GENIUSES WHO DESIGNED THE CAR AND BRAGGED ABOUT HOW WUNNERFUL THEY WERE!!! Seriously, that’s what the carmakers would say, and they’ll have lots of well-paid lawyers backing them up. Ensuring accountability for such injuries would be a nightmare of litigation — which is, IMHO, one more damn good reason to simply not allow driverless vehicles at all, at least not until we have a well-established standard for who is responsible for what.

  17. invivoMark says

    is the improvement really enough to justify cutting yet another large number of humans out of decent jobs (in this case, cabbies, chauffeurs, truck drivers, etc.)

    If driverless cars were actually safer, then yes! Those are NOT “decent jobs.” They’re not healthy, they’re among the most dangerous jobs, with high rates of injury and death, and they’re mostly not well paying because the driver is often expected to own, maintain, and insure their own vehicle. Since most drivers aren’t technically “employees”, they do not get health insurance, life/disability coverage, retirement, childcare, or other benefits. After factoring in vehicle maintenance, repairs, and insurance, drivers can often earn below minimum wage, which many lack the understanding to calculate.
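
    A minimal sketch of that arithmetic, with purely hypothetical weekly figures: the effective wage is what is left after vehicle costs, divided by the hours actually worked.

        # Effective hourly wage after expenses, using hypothetical figures.
        gross_earnings = 900.00    # fares + tips for the week (hypothetical)
        vehicle_costs = 280.00     # fuel, maintenance, depreciation, insurance (hypothetical)
        hours_worked = 50          # including unpaid time waiting for rides (hypothetical)

        effective_hourly_wage = (gross_earnings - vehicle_costs) / hours_worked
        print(f"${effective_hourly_wage:.2f}/hour")   # $12.40/hour, before taxes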

    Companies are taking advantage of their drivers, and the sooner we can get rid of these awful jobs, the better.

  18. flex says

    @15, sonofrojblake, who wrote,

    This is not an acceptable response. “We don’t measure it, so we can’t be expected to correct it” is the sort of thing that could get you jailed if you said it in the context of almost any other industrial accident scenario.

    I’m not arguing with you here, but while a lot of modern industry developed with information gathering in mind, the automotive industry is, frankly, older than that. Early in the development of airplanes, sensors were placed to monitor a lot of the response to the surroundings. This probably was a result of the defense industry wanting to build better planes for combat (I don’t really know), but it transferred to civilian transportation. Accidents, and even worker deaths, in the chemical industries led to determining the cause and adding protections as well as sensors to monitor exposure. Modern manufacturing plants have adopted similar safety measures. Although a few, like the meat-packing industry, appear to have successfully avoided too much (needed) regulation. Agriculture and food production have often been exempted from labor and safety laws required in other industries.

    But for the automobile, it has always been viewed as the responsibility of the driver to be the sensor array, not the vehicle itself. So while the technology has been there, it has only recently been added to cars as part of the autonomous vehicle initiative. There isn’t half a century of history to learn what things are important to monitor. That is changing.

    If you’re hit and hurt by a driverless car — who’s responsible?

    Yes. That is the million-dollar question. Literally. There are millions of dollars in liability at stake here.

    So far as I know, no one has a good answer to that yet. With no human operator to blame, there are a number of possible people/groups who could be claimed to be at fault. Fault could be assigned to the owner of the vehicle, the maker of the vehicle, the state which granted the license for it to operate, the creator of the learning algorithm, or even to the company which compiled the data for the machine learning to learn from, and there are undoubtedly others who could be identified.

    Are there solutions? Yes. One solution is to spread the liability across the entire society and have the state manage it. This would drive the state toward state-funded medical care and tighter regulation. A second option is that the producers of these autonomous vehicles, the Fords and GMs, would bear the liability. The third likely option would be that owners would have to buy special insurance to cover the liability of owning a vehicle capable of autonomous driving.

    That being said, if individual owners are asked to purchase significantly more expensive insurance to cover the liability, the market for these vehicles will take a long time to grow. At the same time, based on the past history of USA capitalism, this appears to me to be the most likely place where liability will be placed.

  19. seachange says

    FWIW, during my daily walks here in Los Angeles I’ve been seeing Waymo cars with drivers in them pass me by on the street. My impression of them is that these machines are like human Beverly Hills drivers -- when they are in Beverly Hills -- and in that location they are very exact.

    A near-miss *is* a miss. Humans near-miss all the time; we’re human. The companies having these non-reported near-misses have property concerns. They might not be reporting these property concerns to us, this is true. But I don’t think the thing implied in that article and here in the comments -- that they just don’t care about these near-misses -- is true. Right now it seems to me likely that they really, really don’t wanna damage or destroy their vehicles. Depending on how profitable running them is compared to their losses from lawsuits over property damage and medical costs, they might not care in the future?

    Marja, I get seizures, I do not drive. Do not drive at all Marja. Just. Don’t.

  20. Deepak Shetty says

    I for one would really like to see self-driving cars become an everyday reality

    As would I. I hate driving (Star Trek teleporters are my favored mode, though).
    There is a related article about whether driverless taxis are currently economically viable:
    https://blog.dshr.org/2023/11/robotaxi-economics.html

    Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle.

    @raging Bee @16

    I really don’t think it will, and I really don’t think the loss of yet another chunk of decent jobs is a fair price to pay for what will probably turn out to be a rather small increase in overall road safety.

    The issue is not about jobs, though. Automation should reduce the amount of work we need to do -- why shouldn’t we, for example, if we have surplus humans, push for a 4-day work week so that more people can be employed for the same total amount of work as a 5-day work week? The issue is unregulated capitalism, which, with or without automation, is going to screw everyone other than the top 1%.

  21. Marja Erwin says

    I don’t drive either. I still have trouble walking down some local stroads, and more trouble crossing them, and sometimes get hit by cars.

  22. Marja Erwin says

    > It looks as if you are suggesting the use of blinkers ought to be reduced, just in case someone has a sensitivity to blinking lights. I sincerely hope that is not what you mean.

    Well yes, of course.

    With my sensitivities, it’s a lot of painful flashing, and blurs around the flashing, and general disorientation, and the risk of worse. None of that improves visibility.

    For someone without these sensitivities, if the painful flashing isn’t as painful and/or doesn’t cause the same blur and disorientation, then you’d still have several potential hazards all trying to out-flash all the other potential hazards, so … well, does that make it any easier to track everyone who’s flashing? or just harder to track someone in the crosswalk?

    Attentional arms races leave everyone worse off, and some people a lot worse off.

  23. chigau (違う) says

    Marja Erwin
    It is a good thing you are not permitted to drive.
    Driving involves A LOT of co-operation.

  24. Deepak shetty says

    @john Morales
    Heh I wouldn’t have thought of that as a use case
    -- I suppose Jan 6th would be easier with a teleportation device

  25. Holms says

    #24 Marja
    Seconding chigau. Blinkers are there to inform all nearby people about the car’s movements before they happen. Failing to use them is detrimental to road safety.

  26. sonofrojblake says

    I’ve sometimes found myself in the stroad I was waiting to cross after being hit by flashing lights from multiple directions. You don’t know who might be sensitive to these things, or how sensitive.

    It is civilised to make reasonable accommodations for the disabled -- ramps, accessible toilets, audible indicators for the vision impaired at pedestrian crossings and so on.

    Society is aware, far more than it was when I was growing up, of the dangers of photosensitive epilepsy, and warnings abound on video game consoles, in cinemas, nightclubs, art shows, and all sorts of other places where flashing lights might be encountered.

    On the other hand, if simply walking down a typical street features too many flashing lights for you to not have a seizure and walk into the path of an oncoming vehicle, I don’t think it’s unreasonable to suggest that you need to find a way to deal with that.

  27. says

    Wow, sonofrojblake, so nice of you to lecture someone about something they (and not we) have already been dealing with, possibly for many years already. Who ya gonna do next?

  28. Deepak Shetty says

    @sonofrojblake @28
    At least for transporter-2, I preferred The Prestige (the book’s ending, not the movie).

  29. sonofrojblake says

    @Raging Bee, 30:

    Who ya gonna do next?

    The next person who explicitly states that they think the use of mandated safety devices on ubiquitous equipment should be reduced because of its effect on a vanishingly small minority.
