On the Plurality of Values

Late last year I started taking some stabs at fleshing out the philosophical program outlined by Nick Weininger back in 2005, and mumbled some promises of further posts Real Soon Now. Other matters have consumed my attention in the interim, but Will Wilkinson has at last awakened me from my torpor by coming dangerously close to writing my posts for me, so I'd better get cracking again.

The general thrust of my previous two posts has been to argue that consequentialist and deontological ethical reasoning are mutually intelligible and complementary to one another rather than being incommensurable and opposed as is commonly assumed, and that the airy-fairy arguments over them are a veil for substantive disagreements about what matters in ethics. The fundamental problem of ethics isn't how to judge whether an act is good or bad or right or wrong, but rather what is ethically relevant: what matters, and why? Any talk about how to judge actions presumes an answer to this question.

The consequentialist vice is simply to skirt daintily around this question, merely making arguments of the form "if you do X, Y will happen -- and you don't want that, do you?" I have a great deal of sympathy with this approach, both because it usually works well and because it's easy: you can get a lot of mileage out of simply explicitly deducing the consequences of a policy and letting people's moral intuitions do the normative legwork for you. But this is philosophically unsatisfying, since it leaves a great big blank where the consequentialist maximand should be.* What do two consequentialists do when they both agree on what the consequences of a policy are likely to be, but one still favors it while the other doesn't? Shrug and walk away in mutual bafflement, most of the time.

Deontology presents us with the opposite problem: rather than having no foundation, it has an embarrassment of foundations. Ask three deontologists a tricky moral question and you're likely to get three different answers, depending on what duties and rights they think people have. Those of a Rawlsian bent might argue for a right to a basic minimum income (presumably to be provided by taxation), while those of a Randian bent would assert an absolute right not to be taxed. How do they figure out who's right? Usually by seeing who can pound the table the longest and hardest.

Ultimately both of these problems have the same terminus: they stand or fall on people's moral intuitions about what's important. And the reason these disagreements haven't gone away is simply the brute fact that people value lots of different things at different orders of importance. Will's post teases out some of the implications of this fact for political theorizing:

Next, consider the diversity of the components of well-being and the potential conflicts between them. Health and longevity are components of well-being if anything is. But so is the individual achievement of valued aims. Some people’s perfectly reasonable aims may be incompatible with maximizing their health and longevity. Imagine a cholesterol-saturated gourmand who would rather die than give up his foie gras, or an adventurer who draws profound meaning from facing down life-threatening challenges. So… how much weight do we give to one component of well-being — health and longevity, say – relative to another — for example, the achievement of valued aims that conflict with maximal health and longevity? The answer is that there is no answer — no answer science and empirical evidence compels us all to agree on, at any rate.

The upshot, then, is that while we can measure various dimensions or components of well-being — whether it be health and longevity, the experience of pleasure, a sense of self-efficacy and control, the development of basic human capacities, or the achievement of valued aims — we cannot measure well-being as a whole because Mother Nature has nowhere posted a table of exchange rates between the various values that compose individual welfare. It’s simply not out there for the scientist to find.

For any value you choose, it's possible to construct hypothetical scenarios where rational individuals would trade off some of that value against another one. Value pluralism is a brute fact that any serious ethical theory has to deal with somehow, and so far as I can tell there are only three ways to do so:

  1. Subjective values are all there is, and there is no objective fact of the matter about what's good or bad.
  2. There really is only one true Good, and when people pursue anything else it's simply due to error.
  3. There are lots of things that are good and bad, and these things aren't reducible to a single underlying variable.

Position (1) is usually taken by libertarians of an economistic bent, but is unsatisfactory when we consider meddlesome preferences because it doesn't allow us any basis on which to discuss and evaluate states of affairs: I want this and you want that, and where these conflict we have to hash it out either by votes or violence. Position (2) is the one taken by members of various One Big Thing schools of thought, like utilitarianism and Objectivism, but runs into epistemological difficulties. Position (3) appears to be Will's, and strongly informs his contractarian reasoning: if there's no consensus on value, the best we can do is to build a neutral framework in which people's pursuit of multifarious values can be accommodated to the maximum extent possible.

This is a line of thought with which I sympathize, which is why I'm going to place it under scrutiny in a later post. Uh, yeah, Real Soon Now.

*Of course, straight-up Benthamist utilitarians bite the bullet and plant their flag on subjective sensations of pleasure, but the problems with this have been pointed out so exhaustively by other people that I don't really consider it worth my time to rebut.