Shame on You

Ta-Nehisi Coates, Rod Dreher and Ross Douthat's three-way discussion of the virtue (or lack thereof) of shame in promoting "traditional" family structure has sparked rather a lot of interesting commentary, including this, from The American Prospect's Adam Serwer:

Conservatives regularly overestimate the beneficial effects of shame. Shame provokes response in the form of impulse, not long term planning. A person who is ashamed isn't going to think, "I'd better get a degree" or "I'd better get married," they're going to think in the short term about what they can do to rectify their sense of self-worth. How do you see people--men in particular--act when they're ashamed? You rarely see them do something like get married or get a fantastic job; usually they're going to hurt or exploit someone, make them feel as low as they do.

This strikes me as pretty seriously overstated. Indeed, I think that liberals and conservatives alike probably understate the beneficial effects of shame. (Though to be fair to Serwer, it does seem to be true that conservatives regularly overestimate the extent to which shame can be used to keep teenagers from boinking one another.)

So what makes me think that Serwer's claim is overstated? I give you, as exhibit A, the relative effectiveness of deliberative democracy.

Political Theory 101. Picture humans in the state of nature. On one version, we're purely rational beings, concerned only with maximizing our own well-being. You and I come upon an apple tree with lots of lovely apples -- all of which are too high for either of us to reach. Should we cooperate to get the apples? Sure. As long as you are the one to go up in the tree and toss them down. After which, I'll leave you in the tree and run off with all the apples. Of course, you know this, and so won't want to go into the tree -- not without, say, a plan to hit me in the head with the first apple you pick. But I, knowing this, will defend myself...and you can see where this is going. We end up in the Hobbesian state of nature, a war of all against all that ends only when an all-powerful Leviathan, armed with absolute powers of life and death, steps in to change our incentives and make cooperation look more attractive.
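To make the trap concrete, here is a minimal sketch of the apple game as a one-shot prisoner's dilemma. (The payoff numbers are invented for illustration; any numbers with the same ordering tell the same story.)

```python
# One-shot "apple tree" game. Each player either cooperates (climbs
# and shares) or defects (grabs and runs). Payoffs are illustrative.
# payoffs[(my_move, your_move)] = (my_payoff, your_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # we split the apples
    ("cooperate", "defect"):    (0, 5),  # I climb, you run off with the lot
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # nobody climbs; a windfall apple each
}

# Whatever you do, I do better by defecting, so defection strictly
# dominates and two purely rational players land on (defect, defect).
for your_move in ("cooperate", "defect"):
    if_i_cooperate = payoffs[("cooperate", your_move)][0]
    if_i_defect = payoffs[("defect", your_move)][0]
    print(f"you {your_move}: I get {if_i_cooperate} cooperating, {if_i_defect} defecting")
```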

But that's not the only possible vision of life in the state of nature. There's also the version where you and I, bound by some basic understanding of the demands of morality, actually cooperate for the most part. We might still need some sort of third-party to help resolve disputes; after all, we're still mainly self-interested, and not everyone behaves according to the requirements of morality. But in Locke's vision, the state is there mainly to resolve disputes and to protect us from those who don't do what they ought. That's a very different role from the one Hobbes describes, where the state is necessary to protect us from our perfectly rational behavior.

Looking around at the world, it seems pretty obvious that Locke is right. We humans survive perfectly well without a Leviathan. So what is it that Hobbes leaves out?

I'd argue that it's shame.

The simple fact is, most of us follow the dictates of morality most of the time. And it's not fear of getting punished that keeps us on the straight and narrow. Indeed, if the only thing that is keeping you from murdering me is fear of jail, then you're not the sort of person I really want to associate with. So what does keep us behaving morally? John Stuart Mill gives us a pretty good account:

We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience.

Mill tells us that Hobbes got things wrong. Humans aren't motivated only by formal sanctions issuing from the Leviathan. We're also motivated by the opinion of our fellow-creatures. We're worried, in other words, that if we do something that other people don't like, they will think less of us. As children, we start to internalize those worries, such that when we fail to live up to those values that we have internalized, we feel bad about it. That feeling is what we generally call guilt. And, for most of us, guilt and shame can't really be disconnected.

We can certainly disagree about whether or not a particular action ought to be stigmatized. Personally, I side with Coates and Serwer on the specific issue. If a "traditional" family works for you, great. That's your choice to make, and I'm happy to let you make it. But if you prefer to marry a person with plumbing like yours (or half a dozen people or the ghost of your dead lover whom only you can see) and raise a dozen identical clones created from the genes of Brad Pitt and Mother Teresa, then as long as all the participants are on board and you can afford to raise and educate your clone-lets, I wish you well. And I don't think that you should be ashamed of your personal preferences.

But I think we should be careful about the sorts of things to which we attach shame precisely because shame is a socially useful (and hugely effective) tool. Discounting the general usefulness of shame because you disagree with its application in a particular case is both wrong and counterproductive.


Shame also reduces to self-interest

Shame serves the self as well. Gaining the approval of others gains their cooperation, which, in a species as highly dependent on cooperation as ours, is key to our individual survival. Shame, then, is not a quality that we have in addition to our self-interest. It is a tool of our self-interest. The problem with the Hobbesian picture isn't that it depicts us as purely self-interested when we are not, but that it fails to take the analysis of self-interest far enough.

Good point

You're right that Hobbes doesn't give enough consideration to what real self-interest entails. I believe that the standard critique is that Hobbes' reasoning relies upon a prisoner's dilemma, but that he failed to understand that what is rational in a single instance of a prisoner's dilemma (namely, defection) is irrational in iterated prisoner's dilemmas.
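To put the point in code, here's a toy simulation (with the standard textbook payoffs, nothing drawn from Hobbes himself) showing that always-defect does fine in a single round but that a conditional cooperator like tit-for-tat comes out ahead once the game repeats:

```python
# Toy iterated prisoner's dilemma with the usual payoff ordering T > R > P > S.
R, S, T, P = 3, 0, 5, 1  # reward, sucker's payoff, temptation, punishment

def play(strategy_a, strategy_b, rounds=200):
    """Total scores for two strategies. A strategy maps the opponent's
    history (list of booleans, True = cooperate) to its next move."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        if a and b:                  # mutual cooperation
            score_a += R; score_b += R
        elif a and not b:            # A plays the sucker
            score_a += S; score_b += T
        elif b and not a:
            score_a += T; score_b += S
        else:                        # mutual defection
            score_a += P; score_b += P
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

always_defect = lambda opp: False
tit_for_tat = lambda opp: opp[-1] if opp else True  # cooperate first, then mirror

print(play(always_defect, always_defect))  # (200, 200): mutual punishment
print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation is stable
print(play(tit_for_tat, always_defect))    # (199, 204): defection gains almost nothing
```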

And I agree with you that shame is in fact a tool of our self-interest. Or, at any rate, I'd agree with you that it can and should be a tool of our self-interest. But shame is also inherently social; I learn to feel shame when I violate the standards of my community. That makes shame potentially quite illiberal, too. And, in some cases, it can work against my self-interest (as, for example, if my community teaches me to feel shame over my inborn preferences for marrying my ghostly lover).

So I am on-board with you this far: to the extent that our actual society approaches a perfectly liberal (in the classical sense) society, shame is a tool of our self-interest. Where our society fails to be liberal, shame may or may not be in our self-interest.

Boiled down

I believe that the standard critique is that Hobbes' reasoning relies upon a prisoner's dilemma, but that he failed to understand that what is rational in a single instance of a prisoner's dilemma (namely, defection) is irrational in iterated prisoner's dilemmas.

I like boiling things down, and, since Hobbesian thinking is still widespread, this may be a good way to boil down a lot of political disagreements: into the prisoner's dilemma camp and the iterated prisoner's dilemma camp.

Agreed. Again.

I think this is also the essence of a lot of the dispute between liberals and libertarians. Is X an instance of a prisoner's dilemma (i.e., an actual public goods problem), or is X an instance of a problem that iterated prisoner's dilemmas (i.e., the market) can ultimately fix?

That is a brilliant dichotomy

I think it's also a dispute that occasionally comes up between natural rights'ers and consequentialists. Example:

Natural rights'er: "So do you think it's okay to kill a healthy dude and harvest his organs to save 10 sick dudes?"

Consequentialist: "A society which tolerated this kind of policy would soon have nobody going to the hospital and no organs to harvest."

Or, to quote Matt McIntosh:

I'm afraid that shop-worn example, along with most like it, obscures more than it clarifies. The problem with these kinds of contrived moral dilemmas is that they're presented completely free of context, and neglect the obvious fact that here in the real world, ethics is applied recursively -- the outcome at iteration t is fed back into iteration t+1. (I might add that this is also where many economists go off the rails.) The organ-harvesting algorithm may maximize utility for an iteration or two, but after several it would obviously start to become degenerate.
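To gesture at the recursion Matt describes, here is a deliberately crude toy model; every number in it is invented. Let trust in hospitals erode each time harvesting occurs, and the harvesting policy wins the first iteration but degenerates over twenty:

```python
def total_lives_saved(harvest, iterations=20):
    """Crude model: lives saved scale with how many people trust
    hospitals enough to show up, and harvesting erodes that trust."""
    trust, total = 1.0, 0.0
    for _ in range(iterations):
        patients = 100 * trust
        saved = 0.10 * patients       # ordinary care
        if harvest:
            saved += 0.05 * patients  # harvested organs
            trust *= 0.5              # the outcome at t feeds back into t+1
        total += saved
    return total

print(total_lives_saved(harvest=True, iterations=1))    # 15.0: harvesting "wins" once
print(total_lives_saved(harvest=False, iterations=1))   # 10.0
print(total_lives_saved(harvest=True, iterations=20))   # ~30.0: degenerate
print(total_lives_saved(harvest=False, iterations=20))  # 200.0
```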

Consequentialism is not well defined

As Scott Scheule has occasionally pointed out, consequentialism begs the question, since it is a function of what consequences you consider good. (Not sure he would agree with the wording here.)

So substitute "utilitarianism" for "consequentialism". You mention only consequentialism but similar points have been raised regarding utilitarianism. My disagreement with utilitarianism is from the standpoint of a game-theory-based account of natural rights, so you can imagine my exasperation at attempts to defend utilitarianism against natural rights by patching it with considerations taken from game theory such as iteration (as has been done in the past). It's a bit like defending communism against a capitalist critique by pointing out what a success China has been lately.

Briefly, game theory acknowledges the separateness of one player's utility function from another's and never seeks to combine them (to sum them) into a global utility function. Instead it leaves them separate and draws conclusions about the optimal strategies of the participants. Utilitarianism, by contrast, attempts to combine them. And that, I think, is the heart of the problem with utilitarianism. The combination of utility functions strikes me as being a hasty and now long-obsolete patch to a fundamentally flawed theory, in something like the way the wavefunction collapse strikes me as being a hasty and now long-obsolete patch to a fundamentally flawed interpretation of quantum mechanics (i.e., Copenhagen).
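Here's a small illustration of that difference; the payoffs are invented, and nothing here is specific to any particular utilitarian theory. Summing the two players' utilities picks an outcome one player would never rationally accept, which is exactly the aggregation step game theory declines to take:

```python
# Two players each choose "a" or "b"; entries are (u1, u2). Invented numbers.
game = {
    ("a", "a"): (2, 2),
    ("a", "b"): (0, 0),
    ("b", "a"): (0, 0),
    ("b", "b"): (10, -5),  # biggest total, but player 2 is sacrificed
}

# The utilitarian move: collapse the two utility functions into one sum.
utilitarian_pick = max(game, key=lambda cell: sum(game[cell]))
print("utilitarian pick:", utilitarian_pick, game[utilitarian_pick])  # ('b', 'b'), total 5

# Game theory keeps the functions separate: ("b", "b") is no equilibrium,
# since player 2 gains by deviating (payoff rises from -5 to 0), whereas
# ("a", "a") is stable: neither player can do better unilaterally.
print("player 2 at (b, b):", game[("b", "b")][1], "after deviating:", game[("b", "a")][1])
```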

As I understand it, "beg the

As I understand it, "beg the question" is an expression largely misused. People use it as a synonym for "raising the question" when it's really properly about "assuming the conclusion in the premises." Because I've used it in both senses, and I don't want to seem stupid for misusing it, I usually avoid the idiom altogether nowadays.

But apart from that, I agree more or less, except for the bit about quantum mechanics, as I'm agnostic between interpretations. The many-worlds interpretation is, from what I can tell, in the ascendant, and the Copenhagen's in decline, but I don't think either the amount of time passed or the level of consensus justifies calling the latter long-obsolete.

Consequentialism is not only generally ill-defined, but dishonest. It's, at best, an attempt to smuggle in a more easily criticized moral stance -- utilitarianism -- by giving it another name. At worst, and from what I can tell, most frequently, it's used in such a vague fashion as to be a completely worthless concept (although a pretty-sounding one). Imagine someone advocating rightism, which means, why not, being in favor of rights. The term would be similarly vapid -- which rights? you'd ask, as a matter of course. Which is why the term "natural rights" is instead used, as it implies some sort of more specific, Lockean suite of rights.

Perhaps "consequentialism" implies some suite of good consequences that I've simply never picked up on, but my understanding is, as I've said, all it means is "utility" or, perhaps, "economic efficiency" (so far as those two things diverge).

Backward

I've always understood utilitarianism to be a subset of consequentialism, rather than a synonym for it. Ethical egoism (as, say, Harry Browne describes it) is a consequentialist account of morality.

To use some technical terms (sorry), consequentialism is a statement about normativity; it tells us what the right-making properties are (namely, consequences). Utilitarianism is a subset; it assumes the same right-making properties, but adds an axiological property, namely, that the consequences we should look for are those that maximize overall happiness.

There are lots and lots of varieties of consequentialism, from Bentham's crude act-utilitarianism, to Mill's qualitative hedonism, to (possibly) Hume's rule-utilitarianism to (more recently) Hare's two-level utilitarianism and Brad Hooker's rule-consequentialism. I realize that it's a fun parlor game to bash Benthamite utilitarianism and thereby assume that all forms of consequentialism are therefore dead. That's a little like "proving" that libertarianism is false by showing that Ayn Rand is totally incoherent.

I Don't Disagree

I agree that utilitarianism is properly a subset of consequentialism -- my point was about how people generally use the term, in an incorrect fashion, i.e., as a synonym. That people use it incorrectly is the reason for my complaint.

You recognize there are lots of varieties of consequentialism. I'll do you one better: there are infinitely many varieties, since people may prize whatever consequences they like. Thus using the term "consequentialism" doesn't communicate much about a person's political and moral predilections: this raises the question, what do people intend when they profess consequentialism without further qualification?

My general impression is that they intend it to be synonymous with utilitarianism.

I don't have that impression

I've generally seen "consequentialism" used as a way of contrasting with "deontology." There are a near-infinite number of ways to interpret the latter, too, though for most people "deontology" is typically used to mean something vaguely Kantian. But both deontology and consequentialism are useful as shorthand for a general strategy (namely, whether it's consequences or individuals that matter most to morality).

Indeed, properly speaking, calling oneself a utilitarian isn't all that much more illuminating about a person's actual moral commitments. There are a whole lot of flavors of utilitarianism, and they aren't always all that compatible with one another.

Of course, as I mentioned to Richard below, my own impression is quite possibly tainted by having spent way too much time around philosophers. I've no doubt that lots of non-philosophers use "consequentialism" as a pretty way of indicating "Benthamite act-utilitarianism." But it seems to me that DR readers are more philosophically sophisticated than average, so I wouldn't want to assume that the two terms were always being used interchangeably here.

Best guess: we're just

Best guess: we're just describing different samples.

My sample is this and associated libertarian blogs. Above, I used the general term "people" instead of the more specific "people I find at this blog and other similar blogs." This was an error; I should have been more specific about who I was describing. I would hope philosophers use terms more accurately, and, in my experience, they do.

Lest I be misinterpreted, I should clarify a few things. I disagree that consequentialism and deontology are useful for describing strategy; i.e., statements like yours -- whether it's consequences or individuals that matter most to morality -- strike me as uselessly vague. These terms are only useful in describing strategy so far as they are associated with something beyond their most abstract, philosophically proper sense: when deontology, for example, implies something Rawlsian or Lockean, and consequentialism implies utilitarianism, et al. Indeed, I suspect -- though I've yet to formulate a supporting argument to my satisfaction -- that at base consequentialism and deontology are identical.

I agree that being described as a utilitarian leaves a lot to be desired, but it is far superior to being described as a consequentialist.

Finally, you seem to be under the impression that when I say "consequentialism is being used as a synonym for utilitarianism" I mean, specifically, "Benthamite act-utilitarianism." I do not: I intend utilitarianism in its broader sense.

I agree DR readers are more philosophically sophisticated than the average bear. But they are not perfect -- I know this through, among other avenues, introspection -- and usage of the term "consequentialism" here is one such instance of that imperfection.

To quibble

The many-worlds interpretation is, from what I can tell, in the ascendant, and the Copenhagen's in decline, but I don't think either the amount of time passed or the level of consensus justifies calling the latter long-obsolete.

To quibble with your quibble, MW defines "obsolete" as:

no longer in use or no longer useful

Since your evidence is consensus, you seem to be talking about use. I'm referring to usefulness. An idea is useful until a new idea comes along which renders it superfluous. Everett came up with his interpretation in the fifties, rendering the collapse superfluous. This is a matter on which reasonable people disagree, hence "strikes me." Reasonable people also disagree on utilitarianism, making the comparison that much more apt.

On the matter of begging the question, I probably misused the phrase but not all that badly, IMO. It means a circular argument. I used it to mean a circular definition (the good is defined as that which results in the good). The Wikipedia article on circular definition links to the article on begging the question, and vice versa.

Microquibble

Since your evidence is consensus, you seem to be talking about use. I'm referring to usefulness.

I wouldn't draw a sharp distinction between the two. If people are "using" a concept, I presume they do so because they find it "useful."

The distinction

If people are "using" a concept, I presume they do so because they find it "useful."

Naturally, but to restate the distinction: you are in effect summarizing the results of a survey of answers to the question Q (in this case, Q = "is it useful"). I am not conducting a survey, but am answering the question Q. I am emphasizing that it is my answer by saying "it strikes me".

We are in heated

We are in heated agreement.

I think you've clarified our positions aptly enough that I don't need to add more.

Note that Your Natural Rights'er Never Mentioned Hospitals

That's more obfuscatory than anything.

First, it does not matter if a moral dilemma is contrived -- what matters is that it remains possible, and so long as it is possible, it serves as a counterexample (if successful) to whatever it's being used against.

The utilitarian says utility is all that matters. The utilitarian-critic says, "No, look at this situation S, where you would pick the non-utility-maximizing choice. The fact that you make such a decision proves that there are considerations besides utility." Pointing out that Situation S is unlikely, even absurdly so, does nothing to save that point.

One can try and sneak around such thought experiments by invoking new considerations (allowing X may inspire people to do very bad thing Y), but the questioner can always fix that (assume nobody knows you did X). Or, to appeal to authority, as Friedman says in The Machinery in reference to a different thought experiment, "All such evasions are futile. I can always alter the assumptions to force the issue back to its original form." So weaseling out of the hard choice is a waste of both parties' time.

As to:

Consequentialist: "A society which tolerated this kind of policy would soon have nobody going to the hospital and no organs to harvest."

Statements like these, though common, are a little too convenient. This consideration might be true, or it might not, but regardless, I am suspicious of anyone who claims to have enough data to predict what result such a policy would have. This suspicion leads me to believe the arguer doesn't actually believe in this convenient third condition, but rather is simply trying to rationalize an inner anti-utilitarian hunch while remaining true to utilitarianism. More honest, I'd say, just to admit that utilitarianism is flawed than to appeal to systems -- like human society -- so complex as to be very difficult to predict, esp. when such systems have, as Matt rightly points out, recursive elements.

Just outta curiosity

Scott - I'd be curious to hear any comments you have on my post 'Institutional Rights'. (I think you're really misunderstanding the theoretical role of utilitarianism, at least on contemporary 'two level' views. It isn't meant to guide our everyday decisions, so organ-harvesting cases are rather beside the point. Instead, the theory serves at a 'meta' level, to determine - for any given context/society - what our ground-level rules and rights should be. For a real test case of utilitarianism vs. natural law, then, we must consider a consistently different context, as I do in my post 'The Contingent Right to Life'.)

Your story is not entirely realistic

Your story is a story about what would happen:

As time passed, the institution became more embedded in the society, and the folk resigned themselves to it. [... and in the end ... ] they could all see that their society's response to it was entirely appropriate, and indeed morally mandatory.

You are not defining the good in any particular way but imagining how the folk would end up defining the good for themselves.

I mentioned some time ago that evolutionary considerations suggest a different outcome. Defectors would disproportionately tend to reproduce. Sure, thirty would die for the one who survived, but that would only serve to enhance his relative fitness. Ultimately, survivor genes would fill the gene pool, driving out martyr genes. The end result would be the destruction of the species (unless births outpaced deaths), but before that happened the species would be survivors through and through with no martyr among them, the martyrs having been eliminated by their own willingness to sacrifice themselves to save thirty others.
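As a toy model of that claim (all parameters invented, and no births, to keep it minimal): each divine 'offer' costs one martyr his life and spares thirty random people, so the martyr genes drain away first and, without reproduction, the rest of the species follows:

```python
import random

def simulate(martyrs=500, survivors=500, events=550):
    """Each event is one 'offer': a martyr (if any remain) volunteers and
    dies, sparing thirty others; once no martyr is left, thirty random
    people die instead. No births, to keep the toy model simple."""
    history = []
    for _ in range(events):
        total = martyrs + survivors
        if total == 0:
            break
        if martyrs > 0:
            # The volunteer dies. The thirty spared are random, so
            # martyr genes get no compensating fitness benefit.
            martyrs -= 1
        else:
            for _ in range(min(30, total)):
                if random.random() < martyrs / (martyrs + survivors):
                    martyrs -= 1
                else:
                    survivors -= 1
        history.append((martyrs, survivors))
    return history

history = simulate()
print(history[499])  # (0, 500): martyr genes driven out, survivors untouched
print(history[-1])   # (0, 0): with no births, the species itself is gone
```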

The God of your thought experiment is in effect a scientist who has imposed a scenario on the species which requires group-selectionist adaptation in order that the species survive. Group selection requires that individuals sacrifice themselves and their potential progeny for the sake of non-kin (must be non-kin, because if kin, then we're in selfish gene territory). The result of your scenario would in all likelihood be, not the evolution of the species along group-selectionist lines, but the destruction of the species (unless the species reproduced fast enough).

Here, try this alternative scenario: on the real planet Earth a lot of people die every day. Imagine that God appeared and said that he would save thirty people's lives every month, but only so long as one person, picked by random lottery, was executed. To make this even closer to your scenario, God could say (and he wouldn't be lying) that He would be striking down thirty random people today but would refrain from doing so if the one person picked by lottery was executed. The observable situation is the same, only the patter is different.

The alternatives are:

1) We (every single person) would reject the offer and the world as we know it would go on.

2) Somebody (not "we" - there is no collective self) would take it upon himself to accept the offer, and would execute the person selected by God, thereby saving thirty people.

My guess is that in the end (1) would prevail. I believe I could back up this guess with selectionist reasons.

But there's another reason: this same scenario could be reproduced by us right now even without participation from God. A hospital could go around and kidnap a random healthy person, harvest his organs, and save thirty. That is, of course, a familiar scenario, and one which has been used to discredit utilitarians. In essence, your scenario (and it is your scenario, since I've only changed it superficially) reproduces the infamous organ-harvesting scenario that utilitarians have rejected as a crude attack on utilitarianism that really only succeeds in attacking crude act utilitarianism. Only, you've come down on the opposite side of this scenario. You are actually advocating the outcome which utilitarians have agreed with non-utilitarians is obviously evil and monstrous.

For the sake of completeness, I would like to recognize in advance that you might actually be able to come up with a scenario in which humans evolved a morality sharply at odds with our current morality. This, however, would serve to demonstrate merely that human morality is not universally applicable to all species. A sufficiently different environment will produce a different creature with a different nature. Natural law proceeds from human nature, but if you change that nature all bets are off.

I've been talking about biological evolution in relation to your scenario, but you might try to argue that the evolution would be cultural - memetic. But the "try to survive" meme has the natural advantage that it preserves one of its carriers. Of course, a "kill yourself for the cause" meme may also do well, but only if the sacrifice disproportionately encouraged the spread of that meme. The "kill yourself to save thirty strangers" meme has the effect of encouraging the survival of whatever memes are carried by those thirty, and so derives no special benefit. It should die out.

I disagree that I am

I disagree that I am misunderstanding anything, obviously. It seems to me you're only restating what Jonathan said, which was, roughly, the difference between rule and act utilitarianism: my argument was that bringing up this distinction does nothing to discredit thought experiments that suggest utility is not the only moral aim.

But as I respect your opinion, I'll try to find time to read your post.

I feel a lot more

I feel a lot more comfortable with my own ethical ideas since I've given up the belief that I need to have a completely consistent set of ethics. The utilitarian ethic describes something that we all value. Appealing social systems have a healthy dose of utilitarianism in them.

That doesn't mean we have to follow utilitarianism, or any other philosophy, to stomach-turning consequences. After all, the reason we like utilitarianism in the first place is that it fails to make us queasy most of the time.

Yes, one option for dealing

Yes, one option for dealing with many ethical problems (or philosophical problems, or physics problems) is simply abandoning consistency. The countervailing force is people's strong taste for consistency.

Seeming inconsistency

An observed inconsistency may simply be apparent, a clash of explanations rather than a clash of judgments. A court delivers twenty judgments. You develop a moral theory explaining the first ten, and someone else develops a moral theory explaining the last ten. These theories clash. This does not mean that the judgments are in conflict - your explanatory theories may merely be in need of revision.

Exactly

I agree.

Morality is a Strategy

The reason it's easy to come up with counterexamples is that moral systems are strategies for coping with the world. The world is too complex a place for a single winning strategy to be successful under all conditions.

great! that's exactly what

great! that's exactly what i'm working on! exactly the same words, "moral systems are strategies"

Alex, More on Moral Systems

Alex,

I expanded on my thinking on moral systems at qando if you want to take a look. Also some more here.

Another discussion where "moral systems" came up

Here's another discussion on this blog where "moral systems" came up.

Here is yet another.

The reason I use

The reason I use "consequentialism" rather than "utilitarianism" is because when I think of "utilitarianism", I think of "greatest good for the greatest number" which is emphatically not the idea that I'm trying to convey. Rather, I want to convey "results which most people, based on my knowledge of people, desire in their ideal society." Not everyone will desire those results. Yes, this definition is vague, but that is due to the limitation of human congruity and my knowledge of people. It also avoids the question of "what are the objectively good consequences". Just like I don't think there is an objective morality wrt rights, I don't think there is an objective morality wrt consequences. My use of "consequentialism" implies that these questions do not have an ultimate answer which utilitarian seems to imply ("greatest good for greatest number"). You might think this gutless; I'm not after guts. I'm after showing as many people as possible that their preferences align with certain policies. Policies are inherently recursive; one-time ethical dilemmas are not.

This consideration might be true, or it might not, but regardless, I am suspicious of anyone who claims to have enough data to predict what result such a policy would have. This suspicion leads me to believe the arguer doesn't actually believe in this convenient third condition, but rather is simply trying to rationalize an inner anti-utilitarian hunch while remaining true to utilitarianism. More honest, I'd say, just to admit that utilitarianism is flawed than to appeal to systems -- like human society -- so complex as to be very difficult to predict, esp. when such systems have, as Matt rightly points out, recursive elements.

Sure, we do not have perfect knowledge, but do we not have quite a bit of knowledge about human nature, even if it's imperfect? Isn't the assumption that humans are generally self-interested vital to economics, and doesn't it help us model and predict human behavior (however imperfectly)? Can't you reasonably predict that if a country is taken over by a communist dictator, the people will suffer?

Isn't the assumption that humans are generally status-seeking a key to understanding hierarchies? Isn't the assumption that people will choose to benefit their families and children over strangers valid?

Don't we know quite a bit about how people will react in certain situations and thus, under certain policies?

Looking at policy consequences always involves recursive elements which, I agree, can be quite messy, and not always fully predictable. But it's also not completely unpredictable. Economics, sociology, and psychology can help us.

Tu Quoque

The simple fact is, most of us follow the dictates of morality most of the time. And it's not fear of getting punished that keeps us on the straight and narrow.

But neither is it fear of internal punishment (guilt or shame). At least, "if the only thing that is keeping you from murdering me is fear of [feeling guilt or shame], then you're not the sort of person I really want to associate with."

The simple fact is, most of us actually care about other people and their well-being. (Surely you don't really believe in psychological egoism, now?)

Possibly overstated?

I do in fact care about plenty of other people. But not necessarily so much about the dude who is standing in the middle of the subway door fiddling with his briefcase while people are trying to get on the train. It's not so much my innate love of humanity that keeps me from shoving him down and walking in. And I suspect that, for most of the rest of the people on the platform, the same applies.

Certainly my feelings for others are sufficient to keep me in line with respect to those for whom I actually have feelings. But you're assuming rather a great deal of moral sainthood if you want to assume that my general love of all persons everywhere is what keeps me following the dictates of morality around total strangers. Maybe that makes me a moral monster. More likely, it makes me the kind of being that evolved to innately protect groups but not humans in general.

And more

You can care about people in the generalized sense that you are motivated not to cause them harm (for their sakes, and not just as a means to soothing your conscience), without having to "feel" any kind of saintly love (or much of anything) for them.

It's possible that you're a moral monster. But I think it's more likely that you just aren't introspecting clearly.

See also Gil Harman's paper 'Guilt-Free Morality' [PDF]

Sure

You can care about people in the generalized sense that you are motivated not to cause them harm (for their sakes, and not just as a means to soothing your conscience), without having to "feel" any kind of saintly love (or much of anything) for them.

Yes, it is entirely possible that we are simply averse to harming others, not for any reason but just for its own sake. However, our aversion to harming others probably exists in us because it serves our self-interest. Natural selection has shaped us to behave in a way that serves our own interest. Consequently, our behavior makes sense from a self-interested standpoint.

I'd like to point out

That this makes three times on this same thread that we have agreed. Should we be looking around for four dudes on horses?

You're certainly right

People can in fact care about others in a general sense. It's just that we typically don't. There's more than a little work in ev psych that would tend to confirm this sort of view. At best, we are evolved to feel altruism toward our group, though even this is a tad controversial.

You're essentially responding to an empirical claim with a theoretical one. If you ask people standing on the train platform (to return to my hypothetical) why they don't knock the inconsiderate boor out of the way, it's unlikely that many of them would tell you "Well, because deep down, I actually care about him and wouldn't want to cause him harm, for his very own sake." Rather, you'd be more likely to get responses like, "That cute girl over there would think I'm an ass" or "I'd feel bad about it later."

Certainly we're capable of moving beyond our genetic programming and caring about others in the abstract. We just don't do that very often, and it's not what actually motivates us. I don't doubt that you really do believe that you feel a genuine concern for all of humanity. That's...not atypical of first-year philosophy grad students, particularly those working in moral theory. What I suspect you'll also find is that the moral intuitions of philosophy grad students don't really match up that well with pretty much anybody else. Yours are not untutored intuitions.

I suspect you'll also discover at some point that people who happen to disagree with your dissertation adviser aren't necessarily doing so because they have failed to carry out careful introspection and are thus operating under some sense of false consciousness. One might, for example, take issue with a paper whose main empirical "evidence" is (by its own admission) purely anecdotal. One should be especially wary when those anecdotes come from acquaintances of professional philosophers. Because, as I've mentioned before, it's hard to find a group whose moral intuitions are less representative of the world at large than those of professional philosophers.

Erm

Gil isn't my advisor, I'm not a first-year, and I don't believe I said anything about most folks' moral intuitions. I don't know how many times I have to tell you I'm not talking about "feeling" altruistic. And people's self-reported motivations are irrelevant.

My point about your poor introspection was motivated by the following thought: if you desire A purely as a means to B, then (by definition) you will be just as happy with the option of achieving B by any other equally effective means. When we plug in 'avoid harm to others' for A, and 'avoid guilty feelings' for B, I suspect this simply isn't a psychologically accurate description of what most people in fact want. Most people wouldn't want to have their consciences removed, for example, even if we could guarantee it wouldn't damage their social status or other interests.

For a toy example, suppose I offer you the following:

(OPTION) As soon as you press the button, (i) great harm will befall many distant people, (ii) your memory of this choice will be wiped, and a rush of mild pleasure will sweep away any traces of guilt or shame that began to form in the moment between your decision and the pressing of the button. Further: the set-up guarantees that nobody else will ever find out your role in the disaster or hold it against you. As a bonus you get $100 (though of course you won't remember why). So, from a self-interested perspective, it's all good.

Would you take this option? Do you think "most people" (not in dire straits) would? It's an empirical claim, of course, but so is the claim that most people like chocolate more than mud.

Talking past each other

Or, if not talking past, then possibly talking about slightly different things. I don't doubt that, in some limited sense, people genuinely do feel something for other people. But the question I was getting at in my initial post (as well as my last follow-up to you) is why they do so.

Consider, for instance, a small child. They do all sorts of fairly immoral things. They say mean things, take other kids' toys, hit and bite them, etc. Little children, in other words, do not feel any sort of innate sympathy toward other humans. It's just not part of our basic human programming. Indeed, give a child the example you offer me (perhaps replacing the $100 with some candy) and watch how fast they jump on it.

So how do we get little kids to stop this sort of behavior? Initially, we appeal to their own self-interest by, say, rewarding good behavior and punishing bad behavior. Later, we make those punishments less explicit (by, say, giving or withholding approval). Eventually, children internalize the lessons; for normal functioning people, that means attaching feelings of guilt to violations of morality.

Now, admittedly, at some point in our development, we don't behave morally explicitly to avoid guilt. Harman's argument is that the moral sense of fully-developed adults is logically independent from guilt. And he argues that at least some of his friends seem to bear that out. I don't think that's at all wrong. But I do think that it (along with your example) is rather beside the point.

My initial discussion was about the role that guilt plays in developing our sense of morality. Telling me that you -- a 23-year-old, second-year (maybe third, if you're really precocious?), philosophy grad student -- don't act explicitly from feelings of guilt is a bit beside the point. The fact is that many -- probably most -- people learn to feel whatever distant feelings for others they may have largely via guilt. The point is not that guilt explains what people want. It's that, for most of us, guilt explains why we want what we want.

The best data we have (which, as I'm sure you know, is not the plural of anecdote) seems to indicate that a moral sense is not innate. And the best story we have for how most of us acquire that sense is tied up with guilt. Which means that it would be wrong to assume that guilt is somehow useless in motivating all sorts of people. It may not motivate you anymore, but it does, as a matter of fact, motivate a whole lot of people. And that means that as a method for changing behavior, guilt can be a powerful tool. And that's really what the initial argument was about.

Just for the record, I did not intend to insult you by calling you a first-year grad student nor did I mean to imply literally that Harman is directing your dissertation. I think perhaps you took my point over-literally. It's easy enough to assume, particularly in online conversations, that you know way more about a subject than anyone else in the room. I was simply attempting to point out that there is always the possibility that the person you're presumptuously dismissing as having failed to be introspective might turn out to be, say, chairing a hiring committee when you go on the job market in a few years.