Liability for self-driving cars


Self-driving cars are already here and even legal on some public roads. They are likely to be better drivers than many humans, since they have faster reaction times and do not suffer from human impairments and distractions such as talking on cell phones or even texting. They could also be a boon for older and disabled people, who would no longer be limited in their ability to go places.

The one big problem is that even the best drivers can get into accidents, and in the event of such an accident the question becomes who is liable if the self-driving car is deemed to be at fault. The owner of the car? The manufacturer? The car’s occupant? If the car owner has no choice at all in how the car behaves, then liability shifts to the manufacturer, something manufacturers would very much like to avoid.

It is clear that programmers need to make choices about how cars react to tricky situations. But this raises some ethical issues, which David Tuffley discusses.

The question of how a self-driven vehicle should react when faced with an accident where all options lead to varying numbers of deaths of people was raised earlier this month.

If car makers install a “do least harm” instruction and the car kills someone, they create legal liability for themselves. The car’s AI has decided that a person shall be sacrificed for the greater good.

Had the car’s AI not intervened, it’s still possible people would have died, but it would have been you that killed them, not the car maker.

So now comes an idea: give drivers the option of choosing settings, so that they would be responsible for what happens. But this too raises other ethical issues.

The user gets to choose how ethically their vehicle will behave in an emergency.

The options are many. You could be:

  • democratic and specify that everyone has equal value
  • pragmatic, so certain categories of person should take precedence, as with the kids on the crossing, for example
  • self-centred and specify that your life should be preserved above all
  • materialistic and choose the action that involves the least property damage or legal liability.
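
To make the quoted options concrete, here is a minimal sketch of what such a user-selectable setting might look like as code. It is purely illustrative: the setting names, the outcome fields, and the weights are assumptions for this sketch, not anything Tuffley or any manufacturer has specified.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EthicsSetting(Enum):
    DEMOCRATIC = auto()     # everyone has equal value
    PRAGMATIC = auto()      # certain categories (e.g. children) take precedence
    SELF_CENTRED = auto()   # the occupant's life is preserved above all
    MATERIALISTIC = auto()  # least property damage / legal liability


@dataclass
class Outcome:
    """Predicted consequences of one available maneuver (hypothetical fields)."""
    occupant_deaths: float   # expected deaths among the car's occupants
    child_deaths: float      # expected deaths among children (the crossing example)
    other_deaths: float      # expected deaths among everyone else
    property_damage: float   # expected damage in dollars


def cost(o: Outcome, setting: EthicsSetting) -> float:
    """Score one maneuver; the car would execute the lowest-cost option."""
    if setting is EthicsSetting.DEMOCRATIC:
        return o.occupant_deaths + o.child_deaths + o.other_deaths
    if setting is EthicsSetting.PRAGMATIC:
        # Arbitrary illustrative weight: a child counts ten times as much.
        return o.occupant_deaths + 10 * o.child_deaths + o.other_deaths
    if setting is EthicsSetting.SELF_CENTRED:
        # Occupant deaths dominate any number of other deaths.
        return 1e6 * o.occupant_deaths + o.child_deaths + o.other_deaths
    return o.property_damage  # MATERIALISTIC


# Example: swerving kills an occupant; braking straight kills a child.
options = [Outcome(1, 0, 0, 80_000), Outcome(0, 1, 0, 5_000)]
best = min(options, key=lambda o: cost(o, EthicsSetting.DEMOCRATIC))
```

The point of the sketch is that each of these “ethical” choices reduces to a different cost function, which is exactly why the choice of weights, and the liability for them, is so contentious.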

Jason Millar discusses some of the other complex ethical issues these cars present, issues similar to the famous trolley problems that ethicists like to pose.

It seems likely that the government will have to step in and make the determinations, or at least set some limits on liability (such as exist for vaccine makers), in order to get around some of these complicated ethical issues.

Comments

  1. Chiroptera says

    If car makers install a “do least harm” instruction….

    What if self-driving cars decide that humans are inherently a risk and that the least harm would be a world without humans?

    Be afraid. Be very afraid.

  2. moarscienceplz says

    What if self-driving cars decide that humans are inherently a risk and that the least harm would be a world without humans?

    Unless the cars are also programmed with the instructions to make new cars, some humans will have to be kept around. I, for one, welcome our new Automotive overlords!

  3. smrnda says

    From what I have heard, a number of self-driving cars have had sizable time on the road, and so far few if any have been in accidents.

    Understandably, something *might* happen, and I’m glad people are looking into that (a standard AI design problem), but I don’t want to see self-driving cars fail to take off owing to worries about what they would do in emergencies, when they are quite likely better drivers than we are.

    I do admit part of this is just because, being legally blind, I’m not able to drive, and it would be convenient to be able to use a self-driving car at times.

  4. sc_770d159609e0f8deaa72849e3731a29d says

    Surely the obvious solution would be to ensure that all self-drive cars are covered by insurance as part of the sale or lease. As they are more widely used, the premiums can be adjusted to reflect the actual cost.

  5. NateHevens. He who hates straight, white, cis-gendered, able-bodied men (not really) says

    Wouldn’t that be solved with a “switch to manual” option? I love the idea of self-driving cars very much, but it should be something that can very easily be turned on and off. It should also be able to turn itself off in the case of a possible “threat” (like an accident or something), in which case the driver would be back in control.

    But I also feel that, in a world of self-driving cars, there would be measures put in place to minimize accidents, eventually getting rid of them completely, so that all cars can be self-driven.

  6. se habla espol says

    Let us never forget the Iron Rule of Liability: depth of liability is always proportional to depth of pocket.

  7. Who Cares says

    @NateHevens. He who hates straight, white, cis-gendered, able-bodied men (not really)(#5):
    Wouldn’t work. You are in your nice self-driving car doing something else besides driving. Suddenly the alarm goes off because there is a threat that needs to be dealt with by you. By the time you recover from the shock of the alarm going off, it is too late. Even if you manage to react immediately, you need to get your bearings, find out what and where the threat is, and get yourself into position to control the car, all the while being distracted by the alarm.

    And what measures are there to minimize accidents? Removing the driver results in a reduction, since a majority of accidents happen due to driver inattention. That leaves external initiators of accidents (the kid running out from between parked cars is a popular one used by critics) and new ones that occur due to the use of self-driving vehicles.

  8. jesse says

    This just shows that self-driving cars are one of those things that sound great in principle but raise all kinds of technical problems as well as ethical ones.

    Part of the problem is what you are programming a self-driving car to do. A human is “programmed” already to preserve the self, so even in accidents caused by driver inattention we kind of already know what a person will do (even if they do it badly).

    So what do you tell the self-driving car to do? Preserve the occupant? The other people? What algorithm shall we use to determine that? Oy vey.

    The other problem is technical. Self-driving cars can’t be fully autonomous -- to orient themselves and all that other stuff, they have to be networked. (There is a reason that the Army’s self-driving vehicle challenge has so far produced only slow vehicles that haven’t even completed the course.) That means a connection to a cell network at the very least. Now what happens when there’s a blackout? Either all the cars stop instantly (not good, since they won’t be able to track every other car on the road, and cars will all have differing masses and momenta) or suddenly 50,000 drivers are asked to go to manual with no preparation at all. Um.

    And then there are the hackers. Even if someone isn’t being malicious, how long before your 12-year-old thinks it would be just hilarious if your car were programmed to drive the school principal to the sex toy shop or strip club? Any system can be hacked, most more easily than we like to think.

  9. doublereed says

    There are other issues I have with self-driving cars. What about emergencies and construction? I live in an area where construction is constantly changing the makeup of the roads. Sometimes you have to make a left turn out of the right-turn lane.

    Or emergencies. What happens when sirens blare and you have to move to the side of the road to let a police car or ambulance through? Are emergency vehicles going to send out a signal to robot cars to clear the way? Well, then criminals could use such a signal.

    There are a lot more problems with the idea of self-driving cars. The fact is that it’s pretty common to need a human driver to properly navigate a situation.

  10. Mano Singham says

    doublereed,

    I have seen the cars on test runs navigate around barricades that were not on their maps. As for sirens, I suspect that the designers have taken this into account, but I don’t know how.

  11. Pierce R. Butler says

    My proposal: extensive mass transit networks with both human and computer controllers, connected to intra-urban fleets of (personal and rental) self-driving mini-carts unable to reach dangerous velocities.

    Yeah, I know -- we don’t have any other reasons to completely retool our transportation systems, do we?

  12. Chiroptera says

    I’ve lost count how many times I’ve almost been run down as a pedestrian by some clown who isn’t paying attention. If a self-drive car can almost run me down more efficiently, then I’m all for progress.

  13. Who Cares says

    @Jesse (#8):
    Cars can be self-driving without the need for a network. Google has demonstrated that with cars that have done thousands of miles on normal roads (maybe even millions, seeing that in 2012 they had already clocked 300,000 miles). That said, they require a map that is precise to the centimeter level, meaning that for now the cars can only drive where Google has already mapped the area to that level of precision.
    As for car-to-car networks, Google has said in a podcast that they are possible but not a priority. And yes, there is a serious amount of server-level processing going on, but it is not required in real time and can be (and is) done ahead of time; Google claims that the real-time part is doable on a desktop-level computer.

    @Doublereed(#9):
    The current setup of sensors in the cars used by Google is a radar, a ladar/lidar, and visual cameras. That is enough to react in time to unexpected obstructions. How the car reacts to your example of having to make a left turn out of a right-turn lane depends on whether it can successfully read and decipher the signs. It is a bit mind-boggling, but those cars can OCR signs and then translate the image/words into an action. The car can also detect emergency service vehicles when they have lights and siren going; I am not sure if that is purely optical or if they have microphones as well.
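
    As a toy illustration of that last step, the simplest possible version of turning OCR’d sign text into a driving action is a lookup table like the one below. The sign strings and action names are invented for this sketch; nothing here reflects Google’s actual (unpublished) pipeline.

    ```python
    # Hypothetical sign-to-action lookup; strings and actions are invented.
    SIGN_ACTIONS = {
        "LEFT TURN FROM RIGHT LANE": "allow_left_from_right_lane",
        "ROAD WORK AHEAD": "reduce_speed_and_replan",
        "DETOUR": "follow_detour_route",
    }

    def action_for_sign(ocr_text: str) -> str:
        """Map OCR'd sign text to a driving action; hand control back to
        the test driver when the sign is not recognized."""
        return SIGN_ACTIONS.get(ocr_text.strip().upper(), "request_human_takeover")
    ```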

    The car isn’t there yet. Even though Google is managing an impressive feat in getting a machine to learn to drive, the car is still bad at applying existing knowledge in a new situation, unless the new situation is very similar to one encountered, logged, and analysed earlier. This is one of the reasons they are trying to drive the cars as much as possible. So if the car doesn’t know how to bypass the road works in your example, the test driver will take over; later on, the logged action and the action the car wanted to take are compared to extract a new scenario, which is then used to train the car how to react in that situation (see the sketch below).
    The troubling implication is that the car will have problems when extreme situations occur that haven’t been thought up beforehand as scenarios.
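
    A minimal sketch of that logging-and-comparison loop, with every name invented for illustration (Google has not published the actual interface):

    ```python
    from dataclasses import dataclass

    # Hypothetical stand-ins for the real (unpublished) internal types.
    Snapshot = dict  # sensor readings at one instant
    Action = str     # e.g. "slow_and_shift_left"

    @dataclass
    class LogEntry:
        snapshot: Snapshot  # what the car saw
        planned: Action     # what the car wanted to do
        actual: Action      # what the test driver actually did

    def extract_scenarios(log: list[LogEntry]) -> list[tuple[Snapshot, Action]]:
        """Keep only the moments where the human overrode the car: each
        disagreement becomes a (situation, correct action) training example."""
        return [(e.snapshot, e.actual) for e in log if e.planned != e.actual]
    ```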

  14. says

    If the setting is “do least harm” then the only way there’s liability is if the manufacturer has negligently programmed the car such that it does not in fact do least harm.

    The “protect the owner primarily” option makes me wonder: is there any legal liability for a manual driver who behaves in such a way? In other words, if a person can show that the choice was between schmearing a pedestrian or becoming canned tomato paste hirself, does the law currently hold that driver liable for the death, or is it considered reasonable not to sacrifice yourself? (I assume of course that there’s no other reason to assign guilt to the driver.)

  15. kevinkirkpatrick says

    To address the thrust of the original post: frankly, I think the ethical debate has all the practical value of “how many angels can dance on the head of a pin”. When it comes to a collision situation, as with every maneuver a self-driving car chooses to make, the effects of all choices will always be probabilistic, calculated with floating-point arithmetic that will not leave “ties”. Of all the options available to the AI operator, there will always be one with the lowest expected value of lives lost (in the astronomically rare case of two options agreeing to 12 decimal places of precision, the need to make a decision could still be met with a simple random-number generator; see the sketch below). As a society we can easily mandate that cars must use that option, and that manufacturers should be liable only for negligence in making those calculations as accurate as possible (not to mention criminal charges of homicide being assessed against any car owners who override that mandate -- i.e., shifting priority to a higher value on the lives of the occupants or, more sinisterly, to the value of not damaging the vehicle).
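
    A minimal sketch of that decision rule, with invented maneuver names and numbers:

    ```python
    import random

    def choose_maneuver(expected_lives_lost: dict[str, float]) -> str:
        """Pick the option with the lowest expected lives lost; break an
        exact floating-point tie with a random choice."""
        best = min(expected_lives_lost.values())
        tied = [m for m, v in expected_lives_lost.items() if v == best]
        return random.choice(tied)  # a one-element list in almost every real case

    # Invented numbers: probabilistic estimates for three candidate maneuvers.
    options = {"brake_straight": 0.31, "swerve_left": 0.27, "swerve_right": 0.27}
    print(choose_maneuver(options))  # one of the two tied swerves, at random
    ```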

    But beyond that, I really, really hope these lines of thinking aren’t hindering the pace of getting autonomous vehicles on our roads. In terms of needless, pointless, horrific deaths -- deaths in which people with full, happy, promising lives in front of them suddenly die in excruciating pain; in which young teenagers experience the agony of the fiery hell of being burned alive; in which parents listen to their children, the same bright-eyed kids who’d been cheerfully babbling and singing moments earlier, expelling their last living breath on the planet -- I find it utterly mind-boggling to hear people balk at making autonomous vehicles mainstream as quickly as humanly possible.

    I can sympathize with the case that today, in 2014, there are still some major hurdles to cross before we hit the point of 100% autonomy (e.g. no driver, steering wheel, or override needed). But the technology needed to handle 99% of the driving contexts of cars on the road today is certainly now within reach. And it bothers me to no end that our society (because “what if the car needs to choose between hitting a child crossing the street or crashing into the proverbial bus full of nuns”) may already be delaying the saving of lives that could be saved right now.

    Yes, maybe one day one bad person will “hack” the computer of his nemesis’ autonomous car and cause it to crash. But offsetting that made-for-Hollywood scenario: autonomous vehicles will not get bored. They will not get distracted. They will not look away from on-coming traffic to change a radio station, or to check the map, or to see why the phone just buzzed, or to look over at a beautiful sunset, or to try to read those frustratingly small address digits on the service station to see if they’ve passed the shop they’re trying to find. They won’t fail to incorporate road conditions into their speeds. They won’t swerve because a bee flew in through the window, or because their cup of coffee spilled on their knee, or because a kid in the backseat suddenly shrieked in pain. They won’t lose track of the distance between their wheels and the curb, and when their front passenger tire blows at 65 mph, they’ll turn into the sudden swerve, immediately straighten out, and come to a safe stop on the shoulder of the road. They won’t tailgate, inadvertently cut off other cars, fail to yield right-of-way to left-turning cars, or mistake 2-way stop signs for 4-way stop signs. They won’t become enraged and react with aggression due to the unsafe or unappreciated driving behaviors of other drivers (though they will immediately react to minimize the hazards raised by such driving). They won’t get drunk or otherwise operate under the influence of alcohol, marijuana, or other mind-altering substances. They won’t get tired, and will never wrestle with the temptation to rest their eyes for just a quick second. Neither their vision nor their reflexes will degrade with age. They won’t cross three lanes of traffic because their exit “snuck up on them”. They won’t miscalculate whether or not they can safely pass the slow-moving tractor in front of them, given the distance and the 130 mph closing speed between them and the on-coming traffic. They won’t spend 1,500,000 microseconds staring stupidly at the brake lights of the car in front of them before deciding that decelerating might be a wise choice.

    9/11 made a hugely lasting impression on me as an American, with much of that emotion fueled by the astounding needlessness of it, by the utter tragedy of so many lives being cut short so abruptly. And yet I’m astounded at how easy we seem to find it to shrug off the experience of those 3,000 men, women and children (in the US alone) who died just as tragically in car accidents in that same month, and at how easy it was to go into October 2001 without pausing once to reflect on the ongoing suffering and sorrow of the families and loved ones who survived them. It’s as though we believe, I guess because “car accidents just happen”, that the lives of the children whose father didn’t make it home on September 10, 2001 were forever changed in some less severe way than those of the families of people who died in the 9/11 terrorist attacks.

    Personally, I have come to hate having to drive. Not for any of the usual aspects that people complain of (in fact, I’ve always quite enjoyed the actual driving experience), but for the bigger picture. For the knowledge of just how bad a driver I am; just how bad a driver all humans are. How even professional drivers, by virtue of being human, make dozens of mistakes (sub-optimal decisions) each minute they drive; and how far short of even that standard I -- and the person in the car passing 4 feet to my left at a 130 mph closing speed -- fall.

    *Those* are the sentiments that make me so passionate with respect to autonomous vehicles: Can’t Happen Soon Enough.
