I for one would really like to see self-driving cars become an everyday reality, as common as conventional cars are now. It may surprise people that many such cars are already in wide use as taxis in several cities. But there are key questions concerning safety, and one would hope that the companies marketing these cars would be transparent about their ability to detect pedestrians and obstacles. Sam Biddle, however, writes that one major company is putting its cars out on the streets even though they appear to have two key vulnerabilities: an inability to see small children and an inability to see large holes in the ground.
In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.
AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.
The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. Following a string of highly public malfunctions and accidents, the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.
Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known internally about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and had so much trouble recognizing children in certain scenarios that they risked hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.
This is just one company. There are plenty of other companies out there operating autonomous vehicles with varying degrees of success. The most prominent are Waymo (Google), Cruise (General Motors), Zoox (Amazon), Argo (Ford and Volkswagen), Aurora (which acquired Uber's self-driving unit), Motional (Hyundai), and Pony.ai.
When it comes to safety, the key question is: safer than what? Accidents involving autonomous vehicles get a lot of publicity, especially those involving fatalities. Promoters of autonomous vehicles argue that the standard should not be zero accidents but whether the cars are, on average, safer than human drivers.
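That "safer on average" claim ultimately comes down to a per-mile rate comparison: crashes divided by miles of exposure, for each kind of driver. A minimal sketch of the arithmetic, using entirely hypothetical figures (none of these numbers come from real data on any company or on human drivers):

```python
def crashes_per_million_miles(crashes, miles_driven):
    """Normalize a raw crash count by exposure (miles driven)."""
    return crashes / (miles_driven / 1_000_000)

# Hypothetical human baseline: 200 crashes over 100 million miles.
human_rate = crashes_per_million_miles(crashes=200, miles_driven=100_000_000)

# Hypothetical AV fleet: 15 crashes over 5 million miles.
av_rate = crashes_per_million_miles(crashes=15, miles_driven=5_000_000)

print(human_rate)  # 2.0 crashes per million miles
print(av_rate)     # 3.0 crashes per million miles
```

The point of normalizing by miles is that a small AV fleet with few total crashes can still be worse per mile than human drivers, which is why raw crash counts alone settle nothing.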
We have all had experiences with bad drivers, even if we persuade ourselves that we are very good ones. People who drive recklessly, exceed speed limits, break other rules, or even drive drunk are sadly common. Autonomous vehicles will do none of those things, so in principle they should be safer. But human drivers seem able to detect some dangers better than autonomous vehicles can.
I must admit that even though I would like autonomous vehicles to become an ever more viable transportation option, I have never ridden in one on city streets and suspect that I would feel apprehensive about doing so in the near future. But then, I am a particularly risk-averse person and thus not the best person to usher in brave new worlds such as these. I am a follower of such things, not an early adopter.