What If's

Anyone who has read (and remembered) my posts and comments here on philosophy knows that I am rather partial to deontological philosophy. That said, I must sometimes point out flawed arguments. In this particular case, a flawed argument against Utilitarianism.

"What if..." begins the argument, whereupon a fantastical world is supposed and the laws of nature are suspended. In this case Roger Kimball writes, "some mad scientist has figured out a way to bring peace, prosperity, and general happiness to the whole world...[but] this brave new world required the yearly sacrifice of one innocent person, chosen at random." He notes that many people's reaction is disgust and horror. They sense something terribly wrong. There is, but it isn't necessarily some deontological law, or a failure of Utilitarianism.

Rather, this hypothetical mad scientist is, in the real world, a shaman as in "sham man". People presented with this scenario sense that the forces of nature that cause unhappiness, war, and privation aren't readily solvable, and in no way require a human sacrifice. Some may even recognize that there have been many men in the past who have offered this very deal and did not deliver the goods. Utilitarianism deals with the real world as it is; as such, for any hypothetical situation to successfully argue against Utilitarianism, the laws of nature must not be violated in the hypothetical. Roger's argument smacks of violating physics, biology, and human nature.

BTW: Here's a hypothetical: there is a nuclear bomb in the lobby of a Manhattan tower. The bomb has been rigged such that anyone can easily disarm it; unfortunately, they will die doing so. Do you force someone to disarm it? Do you accept the deaths of millions? Explain.


Jonathan, My principle is

Jonathan,
My principle is always to act according to self-interest. If theft or murder is in your self-interest, then you should commit those acts. However, in the civilized world theft and murder are rarely efficient solutions. Risk assessment is one of those things a man has to work out for himself. In uncertain situations (i.e. most of the time) it is best to stick to the moral rules. It is also reasonable to stick to the rules when assessing future consequences is too time consuming. Some moral rules are more useful than others, but I'm sure we all know that already.
I don't maintain that the rule of self interest always makes for an easy decision. I don't maintain that you will never be mistaken about your interests. I only maintain that all other rules are worse.
Gavin.

Thea, _It seems to me that

Thea,

_It seems to me that the very difficulty of trying to come up with a case in which utilitarian values are palatable is a pretty strong argument against it. I'm an engineer, not a philosopher, but for me the fact that I have never heard a remotely plausible case in which the utilitarian answer has been even slightly acceptable speaks for itself._

Actually, you have it backwards. The difficulty lies in coming up with some sort of case in which utilitarianism _doesn't_ work. All moral theories yield the same answers in about 99% of the cases out there. If a theory doesn't accord with most of our intuitions, then it's usually just dismissed as clearly wrong.

The sorts of ticking bomb cases are usually supposed to show that the answer a utilitarian offers is radically different from our moral intuitions about what we ought to do. David's complaint (which I echo) is that showing utilitarianism to be utterly at odds with our intuitions is actually rather hard to do. That's what most cases have difficulty in showing. The more realistic and detailed one makes the example, the more our intuitions start to line up with the utilitarian answer. The problem with most examples is that they offer really quick, completely underdeveloped cases in which there are really lots of different possible options and then blithely claim that the utilitarian must do X even though our intuitions say not-X. The problem, though, is that in most such cases, the utilitarian will also say not-X.

_I think that any utilitarian living in a libertarian society would have to realize that this would not be the case. If I found my daughter shot in the head, and the killer told me that someone had called him up at work to say that New York would be blown up if he didn't shoot her, I would not be too understanding._

Well, yes to the first part certainly. But it's pretty likely that most utilitarians would say that living in a libertarian society would not actually be utility maximizing. (Don't start yelling at me Patri, Scott and Micha; I know that there are some exceptions!)

As for the second part, I'm right with you, too. Clearly I'm not at all happy about having my son randomly killed to prevent NYC from being blown up. I have lots of hardwiring that makes me react badly to the idea of someone harming my child. And, as I said in my post to Rick, there's a good utilitarian reason for reacting in just that way. So I would accept, at some intellectual level, that the person who killed my son probably did the right thing even though it's not so likely I'll invite him over for a beer afterwards.

As for the case you give,

As for the case you give, it’s a pretty good one (especially after Eric’s changes). It’s actually quite hard to design these such that they are both realistic and foolproof. Most of the responses so far demonstrate the latter. That is, to get a true dilemma, it must be the case that you eliminate any options other than ‘do X and get Y’ or ‘don’t do X and get not-Y’. Judith Thomson’s trolley examples and Bernard Williams’ Jim and the Indians cases both have this feature. Your case, as currently presented, doesn’t, but it could.

It seems to me that the very difficulty of trying to come up with a case in which utilitarian values are palatable is a pretty strong argument against it. I'm an engineer, not a philosopher, but for me the fact that I have never heard a remotely plausible case in which the utilitarian answer has been even slightly acceptable speaks for itself.

If one person must die to save millions, and there are no volunteers, then I'm not sure how much of a tragedy the loss of that society would be.

If I were at work and someone called me on the phone saying they would set off a bomb in New York unless I killed the guy in the next room, I would call security and have them rescue the guy from the broom closet. The thought would not cross my mind that the voice on the phone could possibly be legitimate.

The problem for a utilitarian in developing these scenarios, and I think with the idea as I understand it, is that we don't have full knowledge. The idea that it is morally acceptable to make calculations based on expected outcomes, and that one is not responsible for egregious acts committed in the belief that they would maximize utility in the end, makes me feel physically ill. This seems to have been the theme of every major genocide and most other governmental acts of aggression: that it's for the greater good.

Utilitarians solve this apparent dilemma by recognizing that moral praise or moral blame are themselves assigned on utilitarian grounds. So you might well perform some action that fails to maximize happiness, but it may still be the case that the rest of us don't blame you for your choice; indeed, we may even praise your choice.

I think that any utilitarian living in a libertarian society would have to realize that this would not be the case. If I found my daughter shot in the head, and the killer told me that someone had called him up at work to say that New York would be blown up if he didn't shoot her, I would not be too understanding.

I do see how utilitarian evaluations can be useful in determining the course of action in some limited circumstances, but for a society to be truly free, people must always be accountable for violations of the rights of another.

David, I see why you didn't

David, I see why you didn't allow all of my modifications, but I do think the absolute moral dilemmas are interesting, even if they are fairly impossible.

In all cases, you should do

In all cases, you should do whatever you judge to be in your personal best interest. Non-initiation of violence is a very good maxim, but it is always secondary to self-interest. There are very few real world situations where coercion is actually worthwhile in the long run. This is why the moral rules are set up as absolutes. However, self interest can, and should, override in extreme situations.
Many people's intuitive response to these hypotheticals is the correct one. They always have some absurd pseudo-moral justification, though, because they don't dare to openly advocate self-interest as a criterion.
Even Ayn Rand drew back from the brink of saying it is correct to murder an innocent person when essential to save your own life. She was wrong to do so. 99.999999% of these situations have a non-coercive solution which should then be preferred. I haven't found it necessary to kill anyone yet :)

I disarm the bomb myself and

I disarm the bomb myself and accept my own death to save millions. I would not coerce anyone else to die in such a situation. The end of "Armageddon" is the perfect example. Bruce Willis' character could not stomach the thought of sending someone else to their death, even to save the entire human race. So he chose his own death instead. Assuming that there isn't enough time to build the magical robot to disarm the bomb (and the bomb could easily be rigged so that only a human could disarm it, by using sensors that detect whether a mass between 45 and 120 kilograms with a radiated heat of at least 35 degrees Celsius is present), the only ethical choice is your own death. Assuming the magic robot could be built but the only person who could build it refuses, then you disarm it yourself and accept your own death.

This isn't a particularly difficult ethical dilemma. A much more difficult dilemma would be one where millions will die unless one is sacrificed to disarm the bomb but there is some constraint that prevents me from choosing to be the one who dies. This would leave me in the position of deciding whether to cause the death of millions through inaction or coerce one person into dying. Assume the bomb is rigged in such a way that the person who knows how to disarm it cannot disarm it themselves. Perhaps the one person who knows how is too far away and it will explode before they can get there. The option left is to have that person communicate with the person who will actually disarm it by cell phone. The person physically touching the arming mechanism is going to die, there is no possible way of preventing it. No one is willing to volunteer to do the work. What do you do?

Rick, _Isn’t the problem

Rick,

_Isn't the problem of utilitarianism that the choices we make for the greatest good are severely limited by what we know or don't know? What we think may be the greatest good turns out to be the exact opposite?_

Yes, you're exactly right that utilitarians are limited by their inability to predict future consequences with certainty. That's why most utilitarians will argue in favor of maximizing _expected_ utility rather than actual utility. Expected utility is calculated using probabilities. So it might turn out that some action A doesn't in fact maximize utility even though, from the perspective of the person performing the act, A was the right thing to do.
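As a purely illustrative aside, here is a minimal sketch in Python of what "maximizing expected utility" amounts to: weight each possible outcome's utility by its probability, sum the products, and choose the action with the highest total. The actions, probabilities, and utility numbers below are made-up placeholders, not anything drawn from this discussion.

```python
# Minimal sketch of expected-utility reasoning.
# All numbers are hypothetical, chosen only to illustrate the calculation.

def expected_utility(outcomes):
    """Sum probability-weighted utilities for one action."""
    return sum(prob * utility for prob, utility in outcomes)

# Each action maps to a list of (probability, utility) pairs.
actions = {
    # e.g. "comply": small chance the promised good outcome happens
    "comply": [(0.10, 100.0), (0.90, -50.0)],
    # e.g. "refuse": very likely nothing happens, small chance of disaster
    "refuse": [(0.95, 0.0), (0.05, -200.0)],
}

for name, outcomes in actions.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):+.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("maximizes expected utility:", best)
```

On these invented numbers "refuse" wins (-10 versus -35), which is only meant to show the mechanics: the agent can do "the right thing" by this standard and still get an actual outcome that fails to maximize utility.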

Utilitarians solve this apparent dilemma by recognizing that moral praise or moral blame are themselves assigned on utilitarian grounds. So you might well perform some action that fails to maximize happiness, but it may still be the case that the rest of us don't blame you for your choice; indeed, we may even praise your choice. Why? Because _usually_ doing as you did would have good consequences, and punishing you for doing what it's usually best to do would have the effect of making it less likely that people will, in the future, continue to perform the act that is usually best.

As far as your other examples go, the utilitarian will give different answers depending upon how, exactly, you draw the example. So I might be obligated to kill my family member to save millions. That doesn't strike me as the wrong answer, though it would, of course, be terribly tragic. It might also be the case that I would, at the end of the day, be unable to do it. That's not clearly evidence that I shouldn't, only that I am weak-willed. (Incidentally, there might well be good utilitarian arguments for having that reaction, too. The claim here would be that there are better consequences all around when people have stronger feelings for family members, particularly since those stronger feelings are hardwired into us already.)

"do you accept the deaths of

"do you accept the deaths of millions? "
The real question is whether you accept your own death or force someone else!

One can always defuse the bomb using a robot! Let's say only one scientist can design such a robot and he/she is refusing to co-operate. Should you force him/her?

Roger’s argument smacks of

Roger’s argument smacks of violating physics, biology, and human nature.

Unless you take into account that soylent green is made of people!! It's people!!!

Let’s say only one

Let’s say only one scientist can design such a robot and he/she is refusing to co-operate.

Then let's get an engineer. Plenty of them, and they tend to do better at building practical contraptions than scientists anyway.

Using a robot is an excellent alternative to the choices I have presented. But supposing there is not enough time to build and debug a robot, and there are no existing robots near enough to get there in time...

Joe, "Those of us who call

Joe,

"Those of us who call ourselves utilitarians are usually those who are willing to accept a much lower body count."

The question of 1 person dying to save millions is an easy one. But what if the ratio were 2 to 1? The person you had to kill was an innocent unknown, but the people who would die if you didn't kill the unknown were people you knew and loved?

Which choice maximizes happiness or the greater good?

Let's complicate it even further. The person you are slated to kill happens to be on the verge of discovering a way to save millions (a cure for cancer, say), but you are unaware of this. Normally 2 lives would outweigh 1 life (with the added bonus of personal benefit, the 2 being loved ones), but in this case the calculation of maximizing is limited to your knowledge.

Do you make the choice limited by your lack of knowledge and therefore minimize happiness? Isn't the problem of utilitarianism that the choices we make for the greatest good are severely limited by what we know or don't know? What we think may be the greatest good turns out to be the exact opposite?

My wife’s answer was that

My wife's answer was that she would order a soldier or police officer to do it, since they had already volunteered to place the group ahead of themselves.

Why not go to the nearest hospital and find a terminally ill patient who's willing to check out a bit sooner in exchange for a cash payment to his family?

David, First, thanks for

David,

First, thanks for pointing out that the vast majority of examples used for bludgeoning utilitarians are really lousy examples. It's nice to hear a non-utilitarian finally recognize the point.

It's also fun to see non-utilitarians squirm when someone finally does present a good case. For most people, if the body count gets high enough, our intuitions start to side with the utilitarian answer. Those of us who call ourselves utilitarians are usually those who are willing to accept a much lower body count. That makes it particularly amusing (as a utilitarian) to watch borderline absolutists like Nozick or Michael Walzer trying to cope with the obvious rightness of the utilitarian response when millions of lives are at stake.

As for the case you give, it's a pretty good one (especially after Eric's changes). It's actually quite hard to design these such that they are both realistic and foolproof. Most of the responses so far demonstrate the latter. That is, to get a true dilemma, it must be the case that you eliminate any options other than 'do X and get Y' or 'don't do X and get not-Y'. Judith Thomson's trolley examples and Bernard Williams' Jim and the Indians cases both have this feature. Your case, as currently presented, doesn't, but it could.

Perhaps instead you could try this. Suppose that I planted a nuclear bomb in NYC. I then call you at work (wherever that is; let's suppose for now that it's not NYC) and tell you that I will detonate the bomb in 10 minutes unless you take your letter-opener and stab a co-worker to death with it. Indeed, I've already arranged in advance that the particular co-worker is tied up in the broom closet, so you won't have to worry about not being able to kill the person. If you kill the person in question, I'll disarm the bomb. Otherwise, I'll kill 8 million people in NY and poison much of the East Coast. Do you stab a helpless innocent or allow millions to die?

The answer, I think, seems obvious. While individual rights are important (I think this even as a utilitarian and I think that there are very good utilitarian reasons for thinking so), they aren't trumps. If the consequences are dire enough, rights can be overridden. To think otherwise is to be left holding that, to paraphrase Hume, it's rational to prefer the destruction of the world to the unjust scratching of an innocent's finger. That's a consistent position; it's just pretty implausible.

Random comment regarding the

Random comment regarding the original comment and not the hypothetical: the sacrifice of an innocent to bring peace back to the community isn't as far-fetched as you might think. From the Girardian point of view, before Judaism and Christianity, that is how things used to work. The extremely simplified version: societies would come to crisis points and then they would scapegoat an innocent, banishing or killing them, and peace would return. But I'm going off on a tangent so I'll stop.

I think it is completely

I think it is completely reasonable that someone who set up a bomb in such a way would put sensors in place that would ensure a human being was present. If I set up a bomb to create a scenario like that (awfully silly, but what the heck), then I would do it that way.

By the way Tom, I agree that it is unrealistic that no one would volunteer. But if someone will volunteer then the moral dilemma of compelling someone to do something that is absolutely fatal to them doesn't exist.

Side note, sacrificing your life to protect the lives of others is not altruism. It is duty. Duty is the group's equivalent of an individual's self-interest.

In any case, work out why it is more ethical to initiate violence against one person than to allow the nuke to explode. Hint: Read J. Neil Schulman's "Collateral Damage and the Libertarian Non-Aggression Principle".

My wife's answer was that she would order a soldier or police officer to do it, since they had already volunteered to place the group ahead of themselves.

1. This, too, is an

1. This, too, is an unrealistic scenario.

Change location as you wish. A ship in the S.F. Bay. The hypothetical situation as given does not violate natural laws. (Note that I did not accept all of Eric's proposed changes!)

1. This, too, is an

1. This, too, is an unrealistic scenario.

2. Even if it weren't, you can rest assured that there would be plenty of volunteers to disarm the bomb. Altruism seems to be a hard-wired human trait. Consider the members of the armed forces who volunteer to go in harm's way, many of them for far less money than they could make as civilians.

I would choose to coerce

I would choose to coerce someone to disarm the bomb. Let's see if someone else can figure out why that would be, ethically, the best choice assuming that your ethics are libertarian in nature.

You folks should write for

You folks should write for the "24" TV show.

Me

In the event of the latest

In the event of the latest hypothetical, one of the potential victims (in the danger zone) might be forced to disarm the bomb, since whoever is selected is doomed either way. This presents another question, though: what is to prevent the sacrificial lamb from intentionally allowing detonation, just to spite those who sanctioned his demise?

Ok Eric, I'll grant that you

Ok Eric, I'll grant that you cannot get there in time. Now what?

You think an extortionist

You think an extortionist will be successful 8 million times before getting caught? I agree that we should think twice about giving in to extortion, but to accept it as an absolute principle is as implausible as accepting never violating a right as an absolute.

I don't think I could accept as a viable threat an extortionist's claim that I must kill another person or 8 million (or even 8 thousand) people will die. It just isn't economical or sensible. If the extortionist is that irrational, then my actions will not have expected consequences with respect to his actions anyway. Therefore I must consider only what I have control over - whether I commit murder, or not. Two or three held hostage, especially loved ones, I can accept. Thousands or millions, forget it.

It’s always going to be possible to construct some sort of scenario in which giving in to extortion will be utility-maximizing.

Yes. Of course, part of that depends on value scales and the details of the scenario. The FBI assumes that money is worth less to the victim's family than the victim is. They also figure that the kidnappers are rational and are demanding (or would accept if they guessed wrong in their demands) an amount that is affordable.

There are other problems, mostly around lack of knowledge. Lack of knowledge affects the situation in two ways: first, any philosophy figured out by humans is going to have holes in it, or places where it is not quite right; second, we don't know all the important details around any given situation. Attempts at maximizing utility calculations are going to be wrong; we just hope that the error is small. (I believe you already mentioned that!)

David, You think an

David,

You think an extortionist will be successful 8 million times before getting caught? I agree that we should think twice about giving in to extortion, but to accept it as an absolute principle is as implausible as accepting never violating a right as an absolute. It's always going to be possible to construct some sort of scenario in which giving in to extortion will be utility-maximizing. Indeed, that's why the FBI always advises people just to pay the money to the kidnappers. Of course we still try to catch the kidnapper afterwards, but we're not always successful. Even so, it seems rational to pay to get one's child back.

All, I specifically took a

All,

I specifically took a fairly stock "moral dilemma" used by utilitarians as an argument against deontologists, and made sure that it did not violate known physical laws or contradict the empirical evidence of human behavior. This is why I did not accept the "no volunteers available" condition that Eric wanted, nor accept that only one person could build the saving device.

Only a handful of people are posting comments, and yet I already have volunteers to "do the right thing". In every disaster many people volunteer to take inordinate risks and even face certain death to help others.

I was not trying for a "foolproof" moral dilemma. I am not convinced that a realistic one even exists. (I realize I set myself up to be wrong here!) Rather, my intention is to show that in most cases both deontological and utilitarian philosophies should end up with congruent conclusions, or, if one or the other does not have a clear conclusion, the other philosophy is useful as a guide. In this case, one person should die, but we can accomplish that without violating rights by finding a volunteer.

Joe and Rick,
In the example where a person must murder someone or X number of people will die: if the person commits the murder, will the extortionist use this strategy again? Most likely. How many times do we make this calculation before we realize that we have killed many times more people than the extortionist originally threatened? Whether deontological or utilitarian, the answer is the same: do not give in to the extortionist.

Are you saying you should

Are you saying you should always act in your self-interest? Or only in these hypotheticals proposed?

I see why you didn’t allow

I see why you didn’t allow all of my modifications, but I do think the absolute moral dilemmas are interesting, even if they are fairly impossible.

I would not argue that they are not interesting for some, but as a scientist/engineer I am also concerned about usefulness. Once we get away from reality, we also get away from usefulness, and start either proving too much or disproving everything. I would not write a paper disproving the existence of black holes by using a different second-order term in the contraction equation of Einstein's special relativity. It may be an interesting exercise, but it is fiction.

An absolutist theory follows

An absolutist theory follows a Platonic tradition: that there is an absolute right and wrong, and that happiness follows from knowing (and doing) the 'right' thing. Aristotle is non-absolutist, in that each person should find his own 'moderation' to best find happiness; that is, there is a certain amount of subjectivity to leading the 'good life'. Utilitarianism can be (and is) criticized on the grounds that it cannot be absolutist without running into contradictions and yet must be absolutist to be useful at all. I'm not terribly fond of that critique either, but that would be another post.

The real splitting point on utilitarianism and deontology is that utilitarianism is a consequentialist formulation, while deontology is an a priori formulation.

Joe, I know I'm going sort

Joe,

I know I'm going sort of backwards, but it makes more sense for my response.

The sorts of ticking bomb cases are usually supposed to show that the answer a utilitarian offers is radically different from our moral intuitions about what we ought to do. David’s complaint (which I echo) is that showing utilitarianism to be utterly at odds with our intuitions is actually rather hard to do. That’s what most cases have difficulty in showing. The more realistic and detailed one makes the example, the more our intuitions start to line up with the utilitarian answer.

I apologize; I had misunderstood the point of these hypotheticals. I thought that they were trying to show that, since in an extreme case a utilitarian choice does not seem immoral, such answers are not immoral in other cases as well. Thank you for the correction.

All moral theories yield the same answers in about 99% of the cases out there. If a theory doesn’t accord with most of our intuitions, then it’s usually just dismissed as clearly wrong.

Again, I am not well read on utilitarianism, but my understanding was that it asserts that any individual is able to decide when it is all right to sacrifice another. For example, I would classify any argument trying to justify collateral damage as a utilitarian argument. However, this is strongly contrary to my moral intuitions regarding the sanctity of life. The arguments that I have heard from people I meet who claim to be utilitarian tend to treat humans as means.

The difficulty lies in coming up with some sort of case in which utilitarianism doesn’t work. All moral theories yield the same answers in about 99% of the cases out there. If a theory doesn’t accord with most of our intuitions, then it’s usually just dismissed as clearly wrong.

Okay, here are two cases, which I don't think are unreasonably improbable, but in which, it seems to me, the moral intuitions of most people would conflict with the utilitarian response:

1. There is an elderly homeless man with no family, and all of his friends are either dead or have completely lost contact with him. This man is beaten to death by a sadist who derives pleasure from killing.

2. A person, who very much would like to have sexual relations, slips something into the drink of someone at a party and has intercourse with her while she is passed out. She wakes up the next morning with a slight headache, but doesn't know that anything has happened.

A third one that would conflict with my moral intuitions, but possibly not those of many people, would be artificially inseminating a comatose woman to harvest the fetus for stem cells, which greatly improve the quality of life of the sperm donor, although the woman expressly forbade such a procedure before becoming comatose.

I don't see a clear

I don't see a clear distinction between utilitarianism and absolutism. Isn't a utilitarian just an absolutist whose gold standard is utility, however he chooses to measure it, rather than liberty or equality or what-have-you?

And one can, as I do, embrace absolutism on a practical level for utilitarian reasons. Sure, under certain hypothetical situations devoid of context and ex ante effects, it may be better to take the pure utilitarian route, but the world doesn't work that way. A state with the power to do good has the power to do evil, and rarely does it have the wisdom to distinguish the two. While it's not optimal in a utopian sense, the best realistic outcome can be achieved by adopting the absolute libertarian solution, which is to cripple the state.

I am not sure whether I am a

I am not sure whether I am a utilitarian or a duty person. I am also not really sure what all this has to do with Libertarianism. I believe all definitions of good and evil come from God. If He has chosen utilitarianism as a moral structure, great. If not, great too. I think he has for the most part, but that isn't the point I want to make.

In the bomb question, I think the moral judgement would fall on the person setting the bomb. The people trying to stop it would do the best they could, but whether someone volunteered or a soldier was commanded to disarm it (which isn't that unusual), the judgement of evil would fall on the originator of the situation. I would not force someone to kill themselves for others. Soldiers volunteer before the fact, and some others might volunteer at that time. I applaud them. But I couldn't force someone to do it. I doubt I would have to.