I’ve always detested those stupid trolley problems — all I see is a ridiculously contrived situation. But maybe they do provide some insight into ethical thinking. At least, that’s what I take away from this one.
There are just too many people who are happy to tell you they think the trolley should keep going straight.
parrothead says
Do we have the option to pull the lever so hard the handle snaps off which we can use to beat the crap out of anyone that complains?
CaitieCat, Harridan of Social Justice says
And despite apologia, Christians make up the large majority of those in the Western democracies who want that trolley to keep going. Not Muslims.
chigau (違う) says
I’ve often thought there should be an option of pushing the person posing the question in front of the train.
Marcus Ranum says
I always find it interesting that people attempt to use the trolley problem to study social morals ... by using a highly manipulative, utterly ridiculous problem. I think the trolley problem indicates there’s something wrong with social scientists and some philosophers.
Saad says
chigau, #3
And then destroy the trolley.
cervantes says
Yes, they are contrived situations, but they do isolate ethical problems that are relevant to reality. Think of research on human subjects, various situations of resource scarcity (e.g. medical triage, famine), the concept of “just war” and “collateral damage” — one could go on and on. People who think about ethics find these scenarios useful, and they have also quite successfully identified some cross-cultural universals. Maybe your general distaste for evolutionary psychology sours your perspective on this stuff, but these are well designed, large scale studies that really do get compelling results.
Kant’s categorical imperative wins out, BTW.
Mike Smith says
The trolley problems are highly useful for bringing out ethical intuitions and delineating various normative theories. Yes, they are highly contrived and highly unrealistic. So? They are designed to be.
This is the philosophical version of someone complaining about “that stupid study of shrimp on a treadmill”
Mike Smith says
Cervantes, what do you mean that Kant wins out? Kantian ethics is not the dominant position in normative theory; consequentialism is (though I’m neither).
fakeemailaddress says
Isn’t a solution to one of these scenarios equivalent to an answer to the question of whether one is equally morally culpable for deliberate inaction as for deliberate action?
cervantes says
Mike Smith, I just mean that Kant’s categorical imperative seems to describe people’s ethical intuitions as elucidated by the trolley problem. Whatever theorists are doing lately is beside the point. Basically, people react negatively to treating other people as means to an end, even if the end benefits more people. So consequentialism loses out in people’s moral intuitions in situations where they are in conflict. You may or may not like people’s intuitions, but that’s what they are.
Marcus Ranum says
Mike Smith@#7:
The trolley problems are highly useful for bringing out ethical intuitions and delineating various normative theories.
Oh, bullshit. They are highly useful for bringing out self-reported projections from people being asked the question, but they in no way bring out anything about how those very same people would actually behave in a variety of situations.
Basically, you’re saying that people are going to tell the truth about how they’d behave in a moral dilemma involving extreme stress and death. They don’t even know how they’d actually behave in that actual situation. And the social scientists who tabulate their responses and wank over them could just as well be making up the numbers – because they more or less are.
Marcus Ranum says
The trolley problems are highly useful for bringing out ethical intuitions and delineating various normative theories.
As I said in #4, that anyone takes trolley problems seriously says more about how ignorant they are, with their pseudo-science and bad philosophy, than anything about human behaviors or morals, at all.
I also should have pointed out that there is ample evidence in the real world that people do not behave like they do in trolley problems. If someone wants to pretend that trolley problems are ‘science’ or input into a scientific process, they first need to overcome the observational evidence of the holocaust, the Kitty Genovese murder, every mass shooting ever, etc, – all of which show that people do things in moments of fear and moral stress that are not what even they would have predicted. Every hero who says “I don’t know why I did that…” is a refutation of the fucking stupid fucking dumbass trolley problem and the fucking dumbasses who take it seriously.
Speaking of which, didn’t Sam Harris have some love for the trolley problem?
Marshall says
Marcus Ranum @#11:
The test isn’t designed to ask what you would do under the extreme pressure that the hypothetical poses, it’s to ask what you think you *should* do. And these bizarre hypothetical situations are good for teasing apart some of the apparently illogical thought processes that humans have when it comes to making moral decisions.
I think Mano Singham has a great post here on some of the same issues.
Marcus Ranum says
Marshall@#13:
to ask what you would do under the extreme pressure that the hypothetical poses, it’s to ask what you think you *should* do
Same objection: people’s responses could change depending on whether they experienced a long line at the lunch counter before they took the survey. It’s absurd to think that you can measure anything about what people think they should do based on abstract questions that are so divorced from reality as to be literally unimaginable.
You may want to do a bit of reading about the current state of the social sciences as a result of their frequent over-reliance on undergrads as test subjects. Short form: you may – barely – be able to say something like: “X% of ivy league school undergraduates who are probably not wealthy because they were willing to take a dumb survey in return for lunch money said that if they saw someone about to light a cat on fire, they’d totes do something about it.”
But the real objection to the whole thing is embedded in your response: “what you think you *should* do”
Yup. Even granted that the surveys are from a self-selected sample, almost certainly biased by age, education, and economic status (as if that’s not enough), you’re just making a list of people’s opinions that everyone agrees is disconnected from reality. So why bother?
You’re probably thinking, “Wow, Marcus has some hate-on for the social sciences! He must be either a real scientist or…” Yah, my undergrad degree is in Psych.
Mike Smith says
Ah OK Cervantes, Thanks!
@Marcus Ranum
First of all, I have no idea how the trolley problems are used in the social sciences. I don’t even care how they are used in other disciplines. I am talking about their usefulness in normative ethics. I don’t even care about how humans would actually behave in a trolley problem. That is not what they are asking from my perspective, nor why philosophers use them. This is an elementary mistake. We use them to tease out what people think they should do. You are confusing is for ought. This, to be frank, is a bit embarrassing for you.
What you call “self-reported projections” is what I meant by moral intuitions. I care about those projections because they tell me what people think they should do. It is an interesting question; it is also different from what people would actually do. The trolley problems hit us with different questions of values, and if we are to determine what is the moral thing to do, we need to settle those questions.
They are invaluable as a tool to delineate different normative theories. Kant answers the trolley problems differently than Bentham, who answers them differently than Hobbes, etc. There are some normative theories that state you must flip the switch (push the fat guy, etc.) and there are some that state that you must not flip the switch (push the fat guy, etc.).
Mike Smith says
I care about what people think they should do in trolley problems because those answers produce general principles that guide human behavior in practical reality.
A quick example: If you are a vegetarian in the Peter Singer mode harping on me because I eat meat, you better damn well answer yes to most of the trolley problems. Otherwise, you are a fucking hypocrite.
Jake Harban says
If you pull that lever, the trolley will run over the guns which will send them flying at high velocity, essentially turning the guns themselves into bullets that will shoot out in a forward arc that’s bound to hit the people further down the adjacent track.
The correct answer is to pull the lever halfway, which will cause the trolley to derail and move safely off to the side.
Marcus Ranum says
Mike Smith@#15:
We use them to tease out what people think they should do. You are confusing is for ought. This, to be frank, is a bit embarrassing for you.
No, I am not confused about that. You are teasing out what people say they think they should do at one particular moment in time which tells you, what, again?
I don’t think that there’s an attempt to go beyond merely measuring attitudes because, if there was, that would be funny. I’m OK with saying “9/10 of the people in our self-selected sample said that right now they’d do X as opposed to Y” What can you infer from that? If the information is being collected simply to make a pie chart to hang on the wall of the philosophy department’s palatial armchair-filled lounge, that’s fine. But what does that collected set of observations say other than “this is some stuff we collected!” You’re not measuring attitudes that might express anything about people’s ethical systems, at best you’re measuring their opinions about hypothetical ethics. Is there a science of Hypothetical Ethics that I’m not aware of? They must have great pie charts.
What you call “self-reported projections” is what I meant by moral intuitions. I care about those projections because they tell me what people think they should do
I’m gonna have to give you a FAIL right there. It tells you what people say they think they should do. You’re measuring opinions about hypotheticals. You’re not measuring attitudes. In fact, I bet you have no way to tell that the subject didn’t just go “fuck this” and click buttons at random. Right? I grant you that you’re measuring something, but I don’t see how you can get from collecting a bunch of stuff from a biased sample to saying that that says anything about what people really think about anything.
They are invaluable as a tool to delineate different normative theories
See what I mean? You’re saying that you’re taking a bunch of opinion polls consisting of what people report they feel at a given moment in time, then rolling that up and accepting it as true because subjects never lie, are deceived, misunderstand the question, or change their minds – and using that as a “tool”?? What kind of “tool”?
Unless, by “tool” you’re talking like one of those cargo cultists building a bamboo airplane and pushing it around hoping that you’ll somehow get actual measurements of stuff like physicists do.
consciousness razor says
Marcus Ranum:
You don’t mean literally unimaginable, since the standard versions are certainly easy to imagine, which isn’t the same as being likely to actually occur to you. So, is that what you meant to say literally, that it’s “divorced from reality” in the sense that the probability of a specific trolley-type incident happening to you is some ridiculously tiny number?
I don’t understand how you think this objection is supposed to work. People are presented a word problem. As you’re probably aware, those are generally pretty contrived, because they’re often aimed at isolating a certain thing and leaving aside other things. And of course the situations posed by them may be ridiculously unlikely. (“Seriously, I’m in an elevator in outer space? I have a bunch of billiard balls and a frictionless table? Nope, no I don’t; done thinking now, bye.”)
In the case of trolley problems, you’re prompted to offer a reason why you should act one way or another. It is not assumed that you must act one way instead of the other. It’s not as if the problem itself (or the results you get in terms of statistics about the most common “intuitions” of the test subjects, for instance) is supposed to somehow demonstrate which choice is correct. This is a common misconception. Instead, you’re meant to evaluate these situations and offer reasons for the choices you believe you would make. You could be wrong about what those are, and even if you’re correct you could make a bad choice. But that’s a thing that you could do. It’s not done just for one person, but for lots of people, to get some sense of what reasons they offer. And we can fiddle with this little gadget’s dials and knobs, so to speak, by tweaking the parameters of the problem in various ways, to see whether and how that may affect the reasons that are given. You could put this gadget to some use, if you actually try to do so.
If I said “the reason I would do X is because of Y, oh and by the way, I was in a long line earlier,” what difference do you think that would make? If the reason I gave depended on something like that, then maybe it wasn’t a very good one. Is it supposed to be an objection to trolley problems that people are capable of giving stupid or arbitrary reasons, that they don’t think clearly a lot of the time, that they might change their mind about it later, that there is no ideal way to control everything about an experimental situation that involves human beings?
Marcus Ranum says
Oh, if you mean “moral intuitions” are “opinions about what people think they think,”
then I agree, you’re measuring those.
Brother Ogvorbis, Fully Defenestrated Emperor of Steam, Fire and Absurdity says
Jake Harban @17:
Doubtful.
The weapons would be crushed or pushed out of the way (and damaged). In order to create high-velocity debris projectiles, the train/trolley would need to be going at quite high speed. An automobile struck by a modern freight train moving at ~45mph will send glass particles up to 20 feet — the speed required for something light to fly 20 feet is not “high velocity”. Additionally, all modern locomotives, trolleys, and other light-rail vehicles are required, in the USA, to have pilots which prevent debris (such as a pile of military-style automatic rifles) from getting under the rail vehicle. This would prevent a pinch-velocity scenario.
—————
I had a conversation with a volunteer today (a right-wing asshat, retired public school teacher, TeaPartier, and preacher) which was very depressing. He stated that it was the “choice to sin against god” which killed these people, that the terrorist/hate criminal was “fulfilling god’s will in showing the destructiveness of the homosexual agenda.” He also stated that he will use this to show his flock just why the “hate and anti-Christian agenda of the left” must be attacked at every juncture. This, he said, is “god’s punishment of America allowing gays to destroy marriage” and that “god used an evil tool to punish evil.”
I try so hard to stay away from this asshole. But, rainy day, and I was eating lunch.
asbizar says
What such “moral dilemmas” remind me of is Sam Harris’s approach to morality: you create a ridiculous scenario and try to justify something heinous based on it. This has been studied by people at the Oxford Department of Ethics, and the findings are, well, not surprising at all. Here is a summary:
*According to classical utilitarianism, we should always aim to maximize aggregate welfare (sounds familiar)
*‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
*They were also associated with lenient views about clear moral transgressions.
*‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
*This lack of association remained even when antisocial tendencies were controlled for.
*So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.
http://www.sciencedirect.com/science/article/pii/S0010027714002054
Jake Harban says
It doesn’t seem any more ridiculous than the prisoner’s dilemma.
Nerd of Redhead, Dances OM Trolls says
Unless you kill off the momentum toward the human victims, the trolley will just continue to move toward them on its side, not its rails. The collision will still kill/injure the humans.
Marcus Ranum says
Consciousness Razor@#19:
So, is that what you meant to say literally, that it’s “divorced from reality” in the sense that the probability of a specific trolley-type incident happening to you is some ridiculously tiny number?
To the charge of “misdemeanor hyperbole” I plead guilty.
“Unimaginable” was too strong a position for me to defend. I would observe that asking someone to imagine what they would think, and tell you what they think they would think, in a situation that (to my knowledge) no human has ever faced in real life – it’s not “unimaginable” but it’s maybe ridiculously hard to imagine.
I don’t understand how you think this objection is supposed to work. People are presented a word problem. As you’re probably aware, those are generally pretty contrived, because they’re often aimed at isolating a certain thing and leaving aside other things. And of course the situations posed by them may be ridiculously unlikely.
Well, maybe I’m flailing for words a bit much. :/ Sorry about that. Let me see if I can reformulate it a bit:
– We are collecting what people say they would do in a particular hypothetical situation
– The hypothetical situation is so divorced from common experience that it’s guaranteed to be fictional
– Further, there are parts of the hypothetical that may be emotionally (or morally?) significant to the subject
– We can assume that the subject understands some of these things to some unknown degree – after all, they are being paid $25 to take this survey so someone must value it, and there must be some purpose behind these hypotheticals
OK. I don’t see where we get from “asked a question about a subject’s choice in a hypothetical situation and got an answer” to “knows what the subject believes about aspects of some moral dilemma.”
To do that you’d have to be able to show that the subject actually understood the dilemma, then you’d have to show that the subject answered in good faith, then you’d have to show that the subject didn’t deliberately or unconsciously skew their response in order to give the person who’s handing them lunch money the answer they want. Those are a few of the things you’d have to factor out of the responses collected in order to be able to believe you’d gotten something actually representing something about their moral systems.
That’s why I am trying to delineate “opinions about moral systems” from actually getting information about the subject’s moral systems – you can’t do that because the subject is in the way and they could be lying, in a bad mood, or whatever. I grant that these hypotheticals can allow us to make strong claims regarding peoples’ measured opinions about hypotheticals. No argument there.
Is that better?
(“Seriously, I’m in an elevator in outer space? I have a bunch of billiard balls and a frictionless table? Nope, no I don’t; done thinking now, bye.”)
Yeah, that’s exactly what I mean!!! If you’re asking someone a hypothetical that’s incredible then you’re even less likely to get an honest response that tells you anything about their actual moral system. It’s like going to a physicist and saying “OK, assume I have a perpetual motion machine and a perfectly spherical cow ….” You’re not going to get a serious answer in response to that question, even if the physicist is trying to be honest and accurate in their response. Well, if we present someone with an absurd “moral dilemma” then I don’t see how we can assume we’re measuring much more than absurdities. (i.e.: opinions about the moral dilemma)
In the case of trolley problems, you’re prompted to offer a reason why you should act one way or another. It is not assumed that you must act one way instead of the other. It’s not as if the problem itself (or the results you get in terms of statistics about the most common “intuitions” of the test subjects for instance) is supposed to somehow demonstrate which choice is correct.
Yeah, you can do a bunch of stuff to try to repair your data set by eliminating bias and giving the survey to a few Kalahari bushmen (San), asking for a descriptive part of the response, but my objection doesn’t go away. It might even get worse because now you may be measuring something about how well your San subjects understand how trains work and you’ve no way of knowing if you’re accessing their actual moral decisions or whether they just want the pig feast you promised them if they’d answer your odd questions. Having a field for “Tell me why you made the choice you did” now means you’re measuring how well the person who interprets that field decides what column that particular response should go into.
Instead, you’re meant to evaluate these situations and offer reasons for the choices you believe you would make.
Yup. And I believe that if people answer those questions as honestly as they possibly can, you’ll get their current opinion about the choices they believe they would make in that one hypothetical situation. Left as an exercise for the reader is to show how their current opinion about the choice they believe they would make has anything to do with the choice they would make, or even their belief about the choice they should make. Right? If we accept that there may be a difference between the decision they believe they should make and the decision they believe they would make, then we have a big problem, too, since – again – they could be wrong. Or they could be embarrassed to say the truth, or … I could probably come up with hypotheticals all day about why the results could be dishonest. How can the researcher tell?
And we can fiddle with this little gadget’s dials and knobs, so to speak, by tweaking the parameters of the problem in various ways, to see whether and how that may affect the reasons are given. You could put this gadget to some use, if you actually try to do so.
I can agree with that.
So, suppose you run the survey past a group of San, and a group of undergrads at an ivy league school. You might measure a difference in the responses! It might even be significant! Then, you could sit down and scratch your head for a long time and think “maybe the San didn’t understand the question because they haven’t seen a train.” So you adjust the questions and do it again and get different answers. Congratulations, you’ve measured how horribly difficult it is to get useful information about human beings’ beliefs and feelings as opposed to their responses to socially determined stuff in a biased sample. In that absurd hypothetical of mine, I would say “congratulations, you’ve learned that San people respond differently in a mysterious but interesting way from ivy league undergraduates.” That’d make a great pie chart.
If I said “the reason I would do X is because of Y, oh and by the way, I was in a long line earlier,” what difference do you think that would make?
Really? Ok.
“I would throw the switch and watch the people die, because I was stuck in traffic all morning and I want to see the world burn”
“I would not throw the switch because I just got a new puppy and I’m full of love for everyone”
Nah, people aren’t going to answer like that. They’ll (to use social science speak) reduce their cognitive dissonance by finding a reason that allows them to feel they are adhering to social norms:
“I wanted the $25 for lunch money and I think this question is unanswerable.”
Is it supposed to be an objection to trolley problems that people are capable of giving stupid or arbitrary reasons, that they don’t think clearly a lot of the time, that they might change their mind about it later, that there is no ideal way to control everything about an experimental situation that involves human beings?
The latter. I’m objecting to the problem of measuring human attitudes when the humans are self-reporting them.
You clearly understand that measuring what humans say they think has some serious problems. That’s why I try to distinguish those types of survey experiments by calling them “measuring opinions about things” – I don’t see how you can get from the response to believing that the response actually reflects the subject’s real beliefs. After all, the reason we use the dilemmas is because many subjects may not know their beliefs.
Hm, let me try that as a way of explaining it: the reason we ask these hypotheticals is to get at people’s beliefs about something that we want to be sure they have never thought about with any depth, or experienced before. So, what we’re going to collect is the subjects’ response to something that we are sure they have never thought about with any depth or experienced before. Do you see a problem, there?
Collecting humans’ opinions about their beliefs is like nailing jello to a wall. You can probably do it if you alter the jello so much that the question then becomes “is that jello?”
Caine says
Mike Smith:
Oh, bullshit. The trolley problem is not in the least bit illuminating in guiding human behaviour in practical reality. Or any other reality. Please, show me where it has been applied in practical reality in a way that has made a difference. A good example of what the trolley problem does is shown in this thread – people taking it much too seriously, often literally, and saying some very stupid shit.
See? Very stupid shit. At least you aren’t alone in that one.
Krasnaya Koshka says
I’ve always considered this sort of “thought experiment” to be so far removed from reality as to be ridiculous. The one in the OP makes sense. Gay people or guns? It’s a good, and timely, question.
If you have time to ask these hypotheticals, you’re likely privileged. In my experience, only white cis-het men ask philosophical questions.
Here is my thought experiment to you: my father is drunk again and he wants to kill either my mom or my great grandmother. My great grandmother stays with us every Friday and brings us the only food that is not potatoes the whole week. My mom is practicing typing to better herself after fifteen years of being beaten by my father. She’s focusing on the future.
My Oma (great grandma) already has a job and brings money to my family. But my Oma cannot drive (she has a stiff leg from the war in Germany) and needs my mom to drive her to our house. Should my father kill my Oma, or my mom?
Neither one was killed but one was sent to the hospital. So, which one?
Marcus Ranum says
PS – I’m much less dismissive of social science experiments which measure people’s actual responses to actual situations. When that happens, then you’re – at least – not dealing with the additional obscuring layer of the subjects’ opinions about the situation. So if you’ve got an experiment in which you have subjects play a card game for that lunch money, then I think you can legitimately make some inferences about “Certain card playing behaviors in ivy league undergraduates.” It’s still highly problematic to go from overt behavior to a certainty of internal belief, but I do think that may be possible in certain very carefully constructed experiments (like the title I offered above).
Feynman, who was probably even more of a social sciences basher than I am, did a great piece describing a rat-running experiment, in which he started asking whether the rats were really learning how to navigate the maze, or whether they were learning which window in the room they had to head toward, or whether the floor was sloped, or whether the researchers were standing by the maze exit, etc., etc., yadda yadda. Feynman. I’m pretty sure that he was mythologizing the incident considerably (as he was wont to occasionally do) because I imagine most social science researchers would never let Feynman near their experiments. And, now that I think of it, that was probably where my brain dredged up the “cargo cult science” slur I threw earlier – that was one of Feynman’s.
Marcus Ranum says
PS – if any of these social science experiments and the resulting models have predictive power, then, let’s talk. Unless, of course, the prediction is: “ivy league undergraduates like $25 lunch money.”
Let’s go from a hypothetical moral framework, to a prediction of how people will behave, followed by a measurement that confirms it. Because the problem with cargo cult science is that occasionally an airplane really does land.
Caine says
Krasnaya Koshka:
Oh, I wish I could manipulate reality, because then it would be your drunken father in the hospital.
slithey tove (twas brillig (stevem)) says
I’ve always considered that conundrum to be equivalent to one with the
answer being: N.A. (Not Applicable). Meaning the question is intended to be a contradiction in terms, to see what conclusion jumps to mind first, and to see if introspection occurs to re-examine the proposed answer. Simply: no answer, as an answer to this question is not even possible. Reword that answer with synonyms, rinse and repeat.
….
the trolley puzzle is (I think) intended to not get an answer, but is a way to encourage introspection about the result of each possibility. Not to find the “trick” solution.
Even so, I’ll stick with offering a trick. Without any details: just “smash the trolley to prevent either of the two offered options.” as in: if
(1) kill_1_loved_one,
vs
(2) kill_100_strangers,
I’ll answer:
(3) none_of_the_above ; while backing out of the room. *shrug*
Lady Mondegreen says
The trolley problem was originally posed by philosopher Philippa Foot. It was a thought experiment useful for examining various schools of ethics and their consequences and implications. Later, cognitive scientists got interested. (What I remember is the observation that in one of the later problems, most people would sacrifice one person to save five (or however many) by pulling a switch, but NOT by pushing someone off a bridge to stop the trolley.)
Anyway. The TPs aren’t useful for figuring out what an individual would do in a particular contrived situation, but that’s not what they were meant to do.
screechymonkey says
Lady Mondegreen @32,
But do we really learn anything useful from studying the “consequences and implications” of various schools of ethics on such an artificial and highly constrained hypothetical scenario? I mean, if Ethical Rule X says that you should take Action 2 in the Trolley Hypothetical…. so what? Even if Action 2 “feels” right or wrong in our moral intuition, is our moral intuition really equipped to deal with such a contrived scenario?
The fact that the later variants have shown that people’s responses are highly sensitive to changes in the scenario just seems to confirm that it’s not a useful tool. A hypothetical that just shifts the debate to quibbling over the details of the hypothetical scenario seems like a waste of time. (I’m reminded of the “violinist hypothetical” in support of abortion rights — all that happens is that the anti-choice folks insist that having sex is the equivalent of agreeing to be hooked up to the violinist, so the conversation just ends up being the same only at a level of abstraction.)
For instance, I think that the reason why many people who throw the switch in the baseline scenario balk at the “pushing the fat man onto the tracks” scenario is not because of some abstract distinction between action and inaction, or some illogical inconsistency on their part, but simply because the scenario is getting increasingly ridiculous. And in the real world, there are always people trying to insist that every scenario is a trolley scenario (“We’ve got to either torture this captive, or the nuclear bomb will kill millions! Now let me proceed to handwave away all objections to the effectiveness of torture and the reliability of information obtained that way, and the contrived nature of this scenario, deny the existence of any viable third options, and insist that you choose either to support torture or allow the deaths of millions!”) So objecting to the premises and quibbling over the details is a pretty damn important part of having a working moral compass.
Vivec says
Reminds me of an ethical dilemma someone in here tried to give me with regard to voting, where it was like “I call you up and say I’m going to shoot people. Either I shoot ten people, get away, and teach more people to do mass shootings, or I shoot 100 people and turn myself in afterwards.”
Then when I said “I hang up the phone and call the police”, they tell me I’m cheating.
screechymonkey says
According to Wikipedia:
Sounds like a lot of professional philosophers “cheat” on the question, too. (Though, to be fair, I doubt you could get more than 75% of philosophers to give you a straightforward answer to any question.)
ledasmom says
The only real point of trolley problems is to make people feel horribly guilty about killing hypothetical people. Reminds me of “The Cold Equations”: Here’s this ludicrous situation which has been made possible by extreme contrivance, for the purpose of feeling all sad and duty-bound to kill people! Science!
My preferred answer is to have better testing and standards for trolley brakes and better ways of keeping people off the damn tracks, so the situation never happened.
Lofty says
The trolley problem: If you hadn’t been so busy creating a pseudo moral dilemma you would have remembered to secure the hand brake on the trolley.
Marcus Ranum says
Lady Mondegreen@#32:
The trolley problem was originally posed by philosopher Philippa Foot. It was a thought experiment useful for examining various schools of ethics and their consequences and implications. Later, cognitive scientists got interested.
Ah, yes, the “virtue ethics” philosopher, whose system of constructing moral systems is one big circular argument. I hadn’t realized she’d invented the trolley problem. Yuck. Then that puts it solidly in the camp of Important Philosophical Hypotheticals That Illustrate Problems with Philosophers’ Opinions About Morals, like the old “if lying is always wrong, Immanuel Kant, are you saying that if Nazis come to the door and ask if there are any Jews hidden, you should always answer truthfully?” Such philosophical hypotheticals do more to spawn moral nihilists than any amount of Nietzsche and tequila ever can (speaking from personal experience).
I didn’t realize it was philosophers’ silliness that social scientists took seriously; I thought it was an invention of the social sciences. It seems odd they need to import silliness from philosophy: they have plenty of their own to work with.
garysturgess says
Tangentially related, perhaps – there is a trend in fiction, particularly action movies, where something akin to the following happens:
* One of the hero’s comrades, loved ones, or some random innocent is wounded, held hostage, or otherwise rendered unable to escape a dangerous situation without assistance.
* A rescue attempt by the hero will likely cost the lives of several of his remaining comrades, loved ones, or random innocents.
* One of the hero’s advisers tells her or him to abandon the threatened/wounded/kidnapped comrade/loved one/innocent.
* This adviser is painted as the coward/villain/turncoat; the hero refuses this advice, and proceeds with the rescue attempt anyway.
Now I’ll admit that a lot of people will react like the hero in this scenario. That’s not my point. My point is that it has always seemed questionable to me that this is the right choice, such that the adviser’s opinion is treated as the wrong choice. I’m not even sure there is a right or wrong here.
The assumption always seems to be, “If you follow that advice, your friend/loved one/teary eyed child will die!” But what about the rest of the friends/loved ones/teary eyed children? Is it really morally unquestionably right to let the world burn as long as your friend/loved one/teary eyed child survives? I mean, perhaps it is – but it’s surely not as black and white as all that.
The new Doctor Who series has several examples of this (e.g. Rory prepared to risk all of existence to save Amy – “love is a psychopath” is a Steven Moffat quote), but it’s prevalent in tons of fiction. I’m not talking about “do we torture this guy to save the city” type dilemmas – I’m talking about “do we sacrifice X number of good guys to save Y number of good guys”, where X >= Y. At the very least, I wouldn’t have thought that “yes, always!” was pretty much always the correct answer.
hotspurphd says
In my experience the trolley problem can have an important effect on behavior. In 1999 I read the following article in The New York Times by Peter Singer, The Singer Solution to World Hunger.
http://www.nytimes.com/1999/09/05/magazine/the-singer-solution-to-world-poverty.html?pagewanted=all
The result of reading this article, which poses a runaway-train situation, was that I started to contribute 20% of my yearly income to charities helping the poor.
Singer writes:
“Bob is close to retirement. He has invested most of his savings in a very rare and valuable old car, a Bugatti, which he has not been able to insure. The Bugatti is his pride and joy. . . . One day, when Bob is out for a drive, he parks the Bugatti near the end of a disused railway siding and goes for a walk up the track. As he does so, he sees that a runaway train, with no one on board, is running down the railway track. Looking further down the track he sees the small figure of a child playing in a tunnel and very likely to be killed by the runaway train. He can’t stop the train and the child is too far away to warn of the danger, but he can throw a switch that will divert the train down the siding where the Bugatti is parked. Then nobody will be killed–but since the barrier at the end of the siding is in disrepair, the train will destroy his Bugatti. Thinking of his joy in owning the car, and the financial security it represents, Bob decides not to throw the switch.
We would all agree, I presume, that Bob did something horribly wrong. But, asks Singer, are we not all in exactly the same situation relative to the world’s poor? It has been estimated that a $200 donation to UNICEF or Oxfam America will save the life of a child. Who among us cannot spare $200 – a far lesser sacrifice than we think Bob should make?”
Persuaded that I could save a life for only $200 and that doing so would not materially affect my standard of living, I followed Singer’s suggestion, gave 20% of my income to a Haitian charity, and have continued to do so to this day.
Clearly these trolley problems can affect behavior. I don’t know about their utility in philosophy, but some of the points made about what they measure in experiments are obvious and elementary. Any psychologist knows that they are not like measurements in physics and doesn’t claim that they are. That doesn’t mean that they are not useful and can’t be used as tools. Those who construct psychological tests have sophisticated statistical tools to measure the reliability and validity of test instruments – or so I learned in graduate school in psychology.
consciousness razor says
Okay, that seems to be saying that it’s bad to rely in any way on any self-reporting or introspection. I would agree that it can be unreliable for making predictions about what actual human behavior will be. But I don’t have any general problem with simply using your brain and saying what you think, nor do I have some general problem with doing social science experiments on human subjects (despite the numerous ways those experiments are going to be limited). Short of solving lots of equations for every particle in your brain and the room you happen to be in, since that won’t be a practical method any time in the foreseeable future, you can get some predictive information and learn that you don’t need to be utterly certain about the results.
But your persistently wrong assumption (we’ve been through this before in other threads), that it is to be thought of as an experiment in social science and not moral philosophy, is really the bigger issue. To start with, I know I wasn’t exposed to trolley problems by being paid $25 as an undergrad to fill out a psych/sociology survey, and neither were a lot of others I’m aware of, so your assumptions about the sampling methods and their statistics are apparently flawed. But those are small potatoes compared to what I’m actually concerned about.
Suppose somebody (maybe not you) was interested in attempting to formulate a consistent set of moral reasons for acting certain ways in a wide variety of circumstances. What would that set of reasons look like, what kinds of valid exceptions might there be, where do problems arise with this particular theory, and so forth? How should someone go about doing something like that? It seems reasonable to begin at the beginning, meaning that you try to answer some of these things by first articulating what your theory says as clearly as you can.
I guess you could try to do it in a completely abstract way (no stories, no scenarios, no concrete people/places/things, nothing to picture or imagine, etc.); but however that would go if it would work at all, you probably wouldn’t be doing so great at getting others to understand what you’re trying to communicate. So, you try to do something a little more concrete, which only abstracts away the details you take to be irrelevant to the moral questions you’re interested in asking. And if you have different people with different views, they may be able to use that very same thinking tool to formulate how their own theory works, how it’s different from yours in its conclusions or in its reasoning. Of course, lots of these may be garbage (presumably, they’re not all correct and they’re not all meaningless). And of course, nobody has anything like infallible knowledge about their own mental states, whether actual or hypothetical. But I don’t think that invalidates this entire process, and it’s not clear how you expect any process to work at all, if you’re not going to admit anything that anybody says as some kind of shaky evidence of what they intend. It’s not supposed to be giving you a solution to a problem, not even a way of attempting to solve a problem, nor is it a surefire way of reading people’s minds or predicting the future. It’s a way of articulating what specific issues or problems or dilemmas that you think there are. Once there’s some clarity about that, you can proceed to do other things with it, or just move on with your life.
Steven Clinard says
You sillies. Doesn’t the Bible have all the answers to moral questions? God is the source of all objective morality, so ask him.
I think it’s in the Parable of the Runaway Chariot, or probably in Leviticus or Psalms or ???
Anyhow, I seem to remember it was a bit confusing since you had to determine if any of the women were menstruating first. It was mandatory to shove an Amalekite onto the tracks, even on the Sabbath, and even if nobody was otherwise in danger.
Hope that helps clear things up.
khms says
Somehow, I don’t think #42 is The Answer.
In any case, using these trolley problems to analyze theoretical moral systems I could see. Using it in any kind of survey or test, however, seems to me to demonstrate a fundamentally broken concept of how our moral intuition works.
Like so many other products of evolution, our moral intuition is one huge mess that mostly works out sufficiently well for those situations we are likely to find ourselves in during our normal life. However, in unexplored situations, I expect this intuition to be highly path sensitive – in other words, what answer it produces depends critically on the (mental) way one arrives at the point where that answer is produced. Put another way, the results are very unreliable – unless what you want to explore is exactly how humans react in situations where their moral intuition fails (if you use it long enough, it produces several mutually incompatible answers). In which case, you had better be prepared for people who want to answer “none of the above”.
hiddenheart says
Something science fiction author and critic Samuel Delany said about genres seems relevant here. A lot of criticism focuses on the edge cases: how far can you get from what anyone expects in genre and still be in it? But in practice, people who like a genre aren’t trying to get away from it! There’s much less criticism, in many fields, aimed at establishing the features that readers (and writers) are drawn to: what constitutes the heart of a field, what do you expect to get when you pick up a work of a particular genre, what do you want?
I suspect that a lot of our moral intuitions work similarly, building up from things we actually do, or see others do, fairly regularly, rather than working down from extreme cases that maybe nobody ever actually encounters at all.
I guess that’s kind of a big “what @khms said in #43”. :)
briquet says
Thinking through the trolley problem and similarly contrived hypotheticals certainly convinced me I’m not a utilitarian.
Like experiments (which are often equally contrived), they have the ability to isolate single variables. I might look at normal situations and think the rational decision-making rule I have is “help the greatest number,” but if I refuse (even hypothetically) to kill a fat man or cut up a homeless person for organs when it could save lives, clearly that’s not true. Doesn’t matter that this isn’t going to come up directly. It does mean I’m more skeptical of moral justifications for other “help the many, hurt the few” cases as well.
Per Lady Mondegreen @32, I never thought the real point was to learn about individual behavior.
@39, good call on the recent prevalence in some genre fiction. I hated this stuff in The Dark Knight, for example. It’s bugged me because, aesthetically and according to fans, it’s often framed as if it’s (1) mature and deep and (2) gritty and realistic compared to standard genre stuff, when in fact these are basically scenarios suitable for a teenager taking a philosophy class, neither realistic nor providing novel insights.
Marcus Ranum says
Consciousness Razor@#41:
I hope it’s OK but I’m going to rearrange my responses to your comment, because I think I see where some of the disconnect between us is happening. You seem to me to be arguing in good faith and I think I am too.
But your persistently wrong assumption (we’ve been through this before in other threads), that it is to be thought of as an experiment in social science and not moral philosophy, is really the bigger issue.
I think you just identified the crux, and we’ve perhaps been arguing about completely different things. My undergraduate degree’s in psychology, and I spent a lot of time in that dungeon; you can see the scars still. When I am reacting to these topics, I am reacting to things like this.
My methodology: I found that link by googling for “social science experiment trolley problem” and it was the first link that came back that appeared to point to an academic paper. I made no further effort to select it and began to read.
What I am concerned with is that, in fact, people are using things like the trolley problem to try to make some kind of inferences about behaviors. And that there is a thing called “cognitive economics” that I have never even heard of. And that there are “behavioral heuristics,” whatever those are. I do appreciate the author’s admission that “there is no immediate way to translate experimental data into the conduct of a representative economic agent.”
Which sounds to me suspiciously like saying “we collected some bullshit” but clearly it was worth publishing about.
The objection I’ve been making here (and, as you said, in other threads) revolves around the difficulty of making claims about knowledge regarding what people think and believe. Because there is a problem: people sometimes don’t know what they believe or think, don’t always think the same way, and don’t necessarily report what they think honestly – especially if it’s around an issue that may be socially stigmatized, like running people over with trains. I think we’ve argued similarly about the same issue with respect to moral systems in which there is a claim that people believe X, because they say they believe X. To that I can only respond that that’s a lot like saying that you know what Donald Trump truly believes, because you saw him say such-and-such on TV. Donald Trump is a rather odd example of exactly what I am talking about: you cannot claim to be making meaningful scientific inferences about moral frameworks based on what Trump says. Unless, as I argue, the moral framework you’re deriving from Trump’s utterances is “roll 2D20 and look on the Trump belief table.” Undergrads that are being paid to respond to social science experiments are maybe more “reliable” than Trump, but they’re also a self-selected sample, and someone studying them is also only able to come to grips with what they chose to report and how.
Continuing with the paper I cited:
Yes. There are researchers attempting to do something appearing to be science regarding peoples’ responses to the trolley problem. I am not complaining about a straw man. People are doing this, and are publishing papers about it – apparently. If I could find one in a second on the top page of a google query, that makes me suspect that there is a lot of this form of research being done.
I have been careful to focus my objection on what I believe is the problem – not the trolley dilemma, but rather how social scientists appear to be attempting to use the trolley dilemma. If Philippa Foot were alive today, I wonder if she would feel as the original inventor of “IQ tests” might, having seen how what was a fairly straightforward experiment has turned into a morass of cargo cult science. The trolley problem is a philosopher’s game, and philosophers are welcome to it. But what we appear to see is social scientists attempting to use a philosopher’s jerkoff kleenex as a serious tool.
I swear upon my stack of Sisters of Mercy bootlegs from the 80s that I did not search and cherrypick this particular article. Because it’s too perfect, I can imagine you think I’m setting you up.
But that is exactly the stereotype behavior I have been complaining about except instead of giving the undergrads lunch money they gave them course credits on a requirement. Could there be any smidgen of sample bias present in a class on “experimental economics”?
It does appear to me that there are people who are trying to infer things about underlying principles of how people behave in economics based on this kind of experiment. Now, I would argue that scientists would do better measuring things about how people actually behave in actual situations. Intellectually, that aligns me with the “skinnerian” psychologists – B.F. Skinner tried to get psychology out of stuffing its head up its own ass by making grand inferences and sky-castles about motivational structures and said: just measure actual behavior. The problem with just measuring actual behavior is it’s not as revealing as making up imaginary motivational structures, then applying them, appears to be.
Now, I’m going to drive-by on a few other points of your comment. Again, respectfully:
Okay, that seems to be saying that it’s bad to rely in any way on any self-reporting or introspection.
I don’t see it as an issue of good or bad. I see it as a question of whether you can make any claim of knowledge based on self-reporting or introspection. If you acknowledge the difficulty of being sure to what degree you can rely on self-reporting or introspection, you have a measure of the degree to which I doubt the data can be useful for anything other than pie charts.
The closing line of the abstract of the paper I referred to is the truth:
“Then, why are you doing it?” comes to mind.
your assumptions about the sampling methods and their statistics are apparently flawed
Then we have a disconnect when the sampling methods I am “assuming” are nearly exactly the sampling methods described in an academic paper published on exactly this topic.
Not trying to get personal here, but are you a philosopher, or a social scientist? Perhaps the inputs/outputs of the trolley problem are very different among philosophers than among social scientists. As I said in my earlier comment, the trolley dilemma may be a useful way of developing contradictions in consequentialist arguments, much as the “if telling a lie is always bad, are you saying that if a Nazi came to your door asking if there were any Jews hidden, you’d always answer truthfully, Immanuel Kant?” question is. That’s a valid way of demonstrating a problem with someone’s philosophy; it’s not an attempt to harness that into something resembling an experimental result. I have no problem with that process, at all, and if I appear to be arguing against that, I have been doing a very bad job communicating and I apologize.
Suppose somebody (maybe not you) was interested in attempting to formulate a consistent set of moral reasons for acting certain ways in a wide variety of circumstances. What would that set of reasons look like, what kinds of valid exceptions might there be, where do problems arise with this particular theory, and so forth? How should someone go about doing something like that? It seems reasonable to begin at the beginning, meaning that you try to answer some of these things by first articulating what your theory says as clearly as you can.
I don’t know how someone should go about something like that. :/ I agree that it would be desirable to try to enumerate and evaluate people’s moral perceptions. I think that another place where you and I disconnect is that you appear to be arguing that one might articulate a theory first, and then collect reported perceptions, after. That’s definitely better than what I believe I am railing against – which is a process of collecting the data (badly) and then trying to infer theories from the data, as is apparently going on in the paper I reference. I’m not trying to get you to defend that paper, it’s just an example. Having a theory then doing some experiments to attempt to disconfirm it and see if it stands, that’s – better.
I guess you could try to do it in a completely abstract way (no stories, no scenarios, no concrete people/places/things, nothing to picture or imagine, etc.); but however that would go if it would work at all, you probably wouldn’t be doing so great at getting others to understand what you’re trying to communicate.
I would think that ideally, we’d measure real behaviors in the wild, as Skinner would have us do. Of course, Skinner failed – all he was able to measure was that pigeons in a box will work for corn. Duh. I guess in a sense your theory that you’re testing would be the framework of those abstractions that you’re going to test and then it’s a matter of devising an experiment that will measure actual behaviors in that actual situation. If one does that, then one is not measuring the subjects’ behaviors in a hypothetical situation and – more importantly – one is not querying the subject about their self-perception of their beliefs about their response. You measure what they did.
If someone wants to run the trolley experiment by putting people in a VR rig with surround sound, and tell them “trigger warning: you could see horrific images of blood, death, and destruction depending on what you choose.” then popping them into a live-action scenario … better. You’re still potentially going to be lied to, deliberately or inadvertently, but … better. Unfortunately for psychology, certain “experiments” really shit the hot tub in terms of human factors ethics in experiment design – I wouldn’t expect any psychology department except for at the CIA to run the VR surround-sound version of the trolley experiment. (And they’d pay for it and use it on prisoners at gitmo)
But I don’t think that invalidates this entire process, and it’s not clear how you expect any process to work at all, if you’re not going to admit anything that anybody says as some kind of shaky evidence of what they intend.
In honor of the election year coming up, yeah, I do think self-reporting anything is a problem. Would you accept self-reported data about ethical experiments from Donald Trump? I wouldn’t. Not because I think he’s a compulsive liar, but because I don’t think Trump knows what the truth is. That’s a problem if my experiment depends on what Donald Trump claims to think the truth is.
Have I just invented “argumentum ad Trump”? You are defeated. (just kidding)
It’s a way of articulating what specific issues or problems or dilemmas that you think there are. Once there’s some clarity about that, you can proceed to do other things with it, or just move on with your life.
I agree. That’s why I am careful to say that what we can learn from these experiments is people’s stated opinions and beliefs about things. We can learn enough to say “9 in 10 people say that in the trolley experiment they think they would let the 10 strangers die.” In that case, yeah, but that’s like stamping “genuine replica” on your paper’s abstract.
Steve Caldwell says
The “trolley problem” is starting to have real-world applicability. The programmers writing the code for driverless cars have to weigh decisions like “does the car kill the driver to save five pedestrians?”
“Driverless cars are colliding with the creepy Trolley Problem”
https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/
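To make that concrete, here’s a toy sketch of the kind of explicit trade-off such code would have to encode. This is purely illustrative – it is not drawn from any real autonomous-driving codebase, and all the names, numbers, and weights below are invented for the example.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    occupants_harmed: int
    pedestrians_harmed: int

def expected_harm(outcome: Outcome,
                  occupant_weight: float = 1.0,
                  pedestrian_weight: float = 1.0) -> float:
    # A purely utilitarian rule weights every person equally; choosing any
    # other weights is exactly the ethical decision the programmer cannot avoid.
    return (occupant_weight * outcome.occupants_harmed
            + pedestrian_weight * outcome.pedestrians_harmed)

options = [
    Outcome("swerve into the barrier", occupants_harmed=1, pedestrians_harmed=0),
    Outcome("stay on course", occupants_harmed=0, pedestrians_harmed=5),
]

least_bad = min(options, key=expected_harm)
print(least_bad.label)  # prints "swerve into the barrier" under equal weights

The point isn’t that any real car would use something this crude; it’s that whoever writes the real version still has to pick the weights, which is the trolley problem in disguise.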
Rob Grigjanis says
Someone close to me once said that they would easily resist extreme torture to protect the people they love. I felt like saying “bullshit”, but what good would that have done? People love to cling to their delusions, and not having met someone who has actually been tortured makes that a lot easier. And, incidentally, it makes it easy to be dismissive of those who have been tortured. Hypothetical bullshit is hypothetical, and bullshit.
Jake Harban says
Ah yes— “Shoot 100 people and turn myself in” is Trump; “shoot 10 people and teach more people to do mass shootings” is the “lesser evil” Clinton.
No, “hang up the phone and call the police” is vote Stein.
Under the scenario you are referring to, “hang up the phone and call the police” means the shooter kills 100 people (the “greater evil” option). The point of that analogy was to demonstrate the absurdity of claiming this makes the 100 deaths your fault for not choosing the 10-death “lesser evil,” and to demonstrate that, without the psychological distance between action and consequence that an election provides, people would be much less likely to endorse the “lesser” evil over a non-evil option.