Gary Marcus predicts that in a few decades we may not only have the option of traveling in driverless cars but may even be obligated to do so.
Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.
Of course, that raises the interesting question of who is responsible and liable if there is an accident.
But an even more difficult problem that will need to be addressed is that automated cars bring with them the need for ethical decision-making that can be programmed into a machine.
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.
Moral and ethical decision-making is already fraught with seemingly insurmountable hurdles even for humans. Recall the experiments with runaway trolleys and the like, in which people are confronted with choices where each option is hard to justify rationally. How much harder will it be to write computer programs to automate such decisions?
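To make the difficulty concrete, here is a deliberately crude sketch of what automating the bridge scenario might look like if it were reduced to an expected-harm calculation. Every function name, probability, and weight in it is invented for illustration; no real system is claimed to work this way.

# A crude illustration, not any real system's logic: the bridge scenario
# reduced to comparing the expected harm of each available maneuver.
# Every number below is an invented assumption.

def expected_harm(prob_of_crash, people_at_risk):
    """Expected number of people harmed by a maneuver."""
    return prob_of_crash * people_at_risk

def choose_maneuver(swerve_risk_to_occupant, collision_risk_to_bus, bus_occupants):
    # Option 1: swerve, putting the single occupant (you) at risk.
    harm_if_swerve = expected_harm(swerve_risk_to_occupant, 1)
    # Option 2: keep going, putting the bus passengers at risk.
    harm_if_continue = expected_harm(collision_risk_to_bus, bus_occupants)
    return "swerve" if harm_if_swerve < harm_if_continue else "continue"

# The "ethical" answer flips on probabilities that nobody can actually know
# in the moment:
print(choose_maneuver(0.9, 0.02, 40))  # 0.9 vs 0.8 expected harm -> "continue"
print(choose_maneuver(0.5, 0.02, 40))  # 0.5 vs 0.8 expected harm -> "swerve"

The point is not the arithmetic but that someone has to pick those numbers in advance, and any choice of weights encodes a moral judgment.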
As Marcus points out, Isaac Asimov took a shot at making a set of rules for robots, but any set of rules, however good it may look on paper, has the potential to turn into a disaster if unthinkingly applied in all situations.
zekehoskin says
One of the questions is silly. Given five injured people who need an organ instantly and one healthy person waiting for a checkup, why kill the healthy person to harvest his/her organs? Pick one of the dying victims and save the other four.
baal says
It’d be a fundamentally scary society that did your math, Zeke. I really don’t want the State deciding who lives and who dies so directly. The history of the State in those kinds of decisions (cf. the death penalty) is mired in racial and power imbalances.
I’d be ok with mandatory blood donation from dead accident victims though (assuming timeliness and testing). Selection bias is somewhat irrelevant once the person is already dead.
@ OP
I don’t think machines will ever do ethics, but in this case I don’t think they need to. If self-driving cars can be proven to have lower accident and death rates during a phase-in period, then the deaths due to moral failures (or, more likely, programming defects) are acceptable.
The proof of concept is done. Google has driverless cars that it uses for its Street View feature, and they have an excellent record. Google does have backup human drivers, but they mostly just sit there.
Tim says
From a broader (global) ethical perspective, cars (driverless or otherwise) are abhorrent. Given the current data on global warming, building an infrastructure of public mass transportation powered by renewable energy would seem to be a better direction in which to focus our ingenuity.
There’s something about futuristic articles about driverless cars that really disturbs me — these “better living through technology” articles take the focus off the horrific nature of the automobile.
smrnda says
If we’re really using computers to drive cars, there ought to be a way for a car to collect information about other vehicles in the vicinity long before they get close enough for a potential collision, so the ‘school bus’ example seems a bit far-fetched to me -- your self-driving car should be notified of things like school buses, police cars, fire trucks, and ambulances.
As far as ethical decisions go, programs are really just policies you lay down, and they are going to be modified once they cause problems, the same way that policies we have in place now get modified once we’ve had a disaster. The computer isn’t ‘deciding’ in milliseconds -- the decision about what to do in each case was already made beforehand, and the machine is just figuring out which case applies. It’s impossible to be sure that a policy is always best, but that’s why we change laws and other rules.
Though I appreciate ethical dilemmas, my own experience as a programmer has taught me that boundary cases are more interesting theoretically than they are likely to actually happen.
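To make what I mean concrete, here's a toy sketch of a policy table; the cases and actions are made up and don't come from any real car's software.

# Toy sketch of a pre-written policy table: the "decision" was made when the
# table was written; at runtime the software only matches the current case.
# All cases and actions here are invented.

EMERGENCY_POLICY = {
    "obstacle_ahead_clear_shoulder": "brake_and_steer_to_shoulder",
    "obstacle_ahead_no_escape_path": "maximum_braking_in_lane",
    "emergency_vehicle_approaching": "slow_and_pull_right",
}

def react(detected_case):
    # Anything unanticipated falls back to the most conservative action --
    # exactly the kind of gap that later forces a revision of the policy.
    return EMERGENCY_POLICY.get(detected_case, "maximum_braking_in_lane")

print(react("obstacle_ahead_clear_shoulder"))  # brake_and_steer_to_shoulder
print(react("school_bus_on_bridge"))           # maximum_braking_in_lane (unlisted case)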
fastlane says
I suspect the auto industry doesn’t have nearly the same rigorous testing and certification requirements as the aircraft industry. If they did, the software, and subsequently the cars, would be much more expensive. The question, as always, is how much are you willing to spend for safety?
Still, though, robot cars will likely be safer than human drivers by an order of magnitude, at least. And we don’t have to get to 100% robot driven cars. I’m not sure where the ‘magic’ point is, but I bet that once a minimum percentage of the cars on the road are robot driven, driving in general will get safer and faster.
For instance, merging on freeway on-ramps will be smoother, since all automated vehicles should be on a Vehicle Area Network, communicating with each other for smooth transitions, and roads will be able to increase their capacity without actually upgrading the infrastructure. Also, the software being developed already adds in an extra buffer and assumes the driver is human if it can’t ‘talk’ to the car ahead or behind.
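Something like this toy rule, say; the gap times and numbers are made up, not parameters from any actual system.

# Toy following-gap rule: keep a short gap when the car ahead is on the
# vehicle network, pad it when the car ahead has to be treated as human-driven.
# All times and numbers below are invented assumptions.

def following_gap_seconds(lead_car_networked):
    base_gap = 1.0          # nominal gap when both cars can coordinate
    human_driver_pad = 1.5  # extra margin when the lead car can't "talk"
    return base_gap if lead_car_networked else base_gap + human_driver_pad

def gap_in_meters(speed_m_per_s, lead_car_networked):
    return speed_m_per_s * following_gap_seconds(lead_car_networked)

# At roughly highway speed (30 m/s), the buffer grows from 30 m to 75 m when
# the car ahead can't be reached over the network.
print(gap_in_meters(30.0, lead_car_networked=True))   # 30.0
print(gap_in_meters(30.0, lead_car_networked=False))  # 75.0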
The first software virus could be pretty devastating, though.
I also suspect, that once we hit that threshold, or near it, my bicycle commute will become much safer.
Paul Jarc says
This territory is covered pretty well at Less Wrong, and the Singularity Institute is actively working on Friendly AI.
That’s partly due to the biases built into the human cognitive architecture. It may be that pushing one person in front of a train really is the right thing to do to save five others, but whether humans should ever trust themselves to judge that it’s right to do so in real life is a different question.
An artificial decision process can have much stronger epistemic rationality than a human, since it doesn’t need to include human biases. Of course, it’s still a very hard problem to create one as versatile as the human mind.
Katherine Lorraine, Chaton de la Mort says
Most likely they’ll have driverless cars with a driver backup in case the shit hits the fan in some way. Computers may be really good, but they’re never 100%.
Jared A says
Currently most cars are human-driven, with a computer backup in case the shit hits the fan in some way (e.g. ABS). Humans may be really good, but they’re never 100%.
Marcus Ranum says
We appear to be machines, so the answer would seem to be “yes.”
Charles Sullivan says
Aren’t there already automated (driverless) freight trains?
Lofty says
I look forward to going to sleep at the wheel of my new car, when I’m somewhere near age 80. Wake me up when I’ve arrived. Should be fun compared to the angst of maintaining concentration 100% of the time.
maxdwolf says
What human undergoes a moral dilemma in such a situation? Things happen so fast in an accident that one just reacts, often improperly. Even the example given is bs. Swerving might well put the driver at more risk, and striking the bus could well help it stop before going over the edge. I suspect the present algorithms would just slam on the brakes, and statistically I would argue this is the best choice.
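Something like this, I'd guess; the threshold is made up, not a real specification.

# Brake-first emergency response: no weighing of lives, just maximum braking
# in lane whenever a collision looks imminent. The 2-second threshold is an
# assumed value for illustration only.

def emergency_response(time_to_collision_s):
    BRAKE_THRESHOLD_S = 2.0
    if time_to_collision_s < BRAKE_THRESHOLD_S:
        return "maximum_braking_stay_in_lane"
    return "continue_normal_driving"

print(emergency_response(0.8))  # maximum_braking_stay_in_lane
print(emergency_response(5.0))  # continue_normal_driving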
Mano Singham says
Seriously, this is a good point. Elderly people who insist on driving when they shouldn’t pose a real risk and automated cars would enable them to have their independence and not be a danger to others.
Trina says
If the bus was also automated, it wouldn’t swerve into your path and the situation would not arise.
lorn says
How will the computer know the difference between a busload of children, an empty bus, and a school bus commandeered, and possibly piloted, by drunken frat boys on a road trip?
Are we talking about all human beings having the same value? Would an overloaded party bus, with seventy people stuffed into it, rule the road as it has more bodies than any other single vehicle?
lpetrich says
Isaac Asimov also wrote a short story about driverless cars: “Sally” in “Nightfall and Other Stories”. In it, manual driving got outlawed because the authorities decided that it was needlessly dangerous.
Isaac Asimov’s Three Laws of Robotics were inspired by existing safety mechanisms. He got tired of stories where robots destroy their creators, with the implication that we were not meant to build them. So he thought up the Three Laws as safety mechanisms for artificially-intelligent systems.
However, putting them into practice yields numerous ambiguities and difficulties, enough to provide Asimov with abundant story material.