You are currently viewing the aggregator for the Distributed Republic reader blogs. You can surf to any author's blog by clicking on the link at the bottom of one of his/her posts. If you wish to participate, feel free to register (at the top of the right sidebar) and start blogging.
The main page of the blog can be found here.
Because I can't not pass on a joke that combines neurology, politics, and bad taste all in one shot:
Whatever you might think about whether the government should be able to remove children from abusive family situations, everyone ought to be disturbed by the fact that they can presumptively do so based on nothing more than a falsified phone call. I feel like a broken record, but it bears repeating: police are not supposed to be used in petty disputes between citizens, whether these are personal or ideological.
When the subject of huge pharma company profits comes up, I usually contend that this is largely due to their powers of marketing: if you have two medications that do more or less the same thing, one brand name and one generic, people will usually demand the more expensive brand name one. It did not occur to me, though it should have, that this could ironically confer an actual benefit: since people stupidly use price and brand as a proxy for quality, this can prime placebo effects over and above what the generic would, making even chemically identical pills more effective. Is it wrong that I find great humor in this?
A Guest Post by John & Oskar
"Let us look more closely at the type of economy which is represented by the 'Robinson Crusoe' model, that is an economy of an isolated single person or otherwise organized under a single will. This economy is confronted with certain quantities of commodities and a number of wants which they may satisfy. The problem is to obtain a maximum satisfaction. This is . . . indeed an ordinary maximum problem, its difficulty depending apparently on the number of variables and on the nature of the function to be maximized; but this is more of a practical difficulty than a theoretical one . . .
Consider now a participant in a social exchange economy. His problem has, of course, many elements in common with a maximum problem. But it also contains some, very essential, elements of an entirely different nature. He too tries to obtain an optimum result. But in order to achieve this, he must enter into relations of exchange with others. If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well. Thus each participant attempts to maximize a function (his above-mentioned 'result') of which he does not control all variables. This is certainly no maximum problem, but a peculiar and disconcerting mixture of several conflicting maximum problems. Every participant is guided by another principle and neither determines all variables which affect his interest.
This kind of problem is nowhere dealt with in classical mathematics. We emphasize at the risk of being pedantic that this is no conditional maximum problem, no problem of the calculus of variation, of functional analysis, etc. It arises in full clarity, even in the most 'elementary' situations, e.g. when all variables can assume only a finite number of values.
A particularly striking expression of the popular misunderstanding about this pseudo-maximum problem is the famous statement according to which the purpose of social effort is the 'greatest possible good for the greatest possible number'. A guiding principle cannot be formulated by the requirement of maximizing two (or more) functions at once.
Such a principle, taken literally, is self-contradictory. (In general one function will have no maximum where the other function has one.) It is no better than saying, e.g., that a firm should obtain maximum prices at maximum turnover, or a maximum revenue at minimum outlay. If some order of importance of these principles or some weighted average is meant, this should be stated. However, in the situation of the participants in a social economy nothing of that sort is intended, but all maxima are desired at once—by various participants.
One would be mistaken to believe that it can be obviated, like the difficulty in the Crusoe case . . . by a mere recourse to the devices of the theory of probability. Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those 'alien' variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles—whatever that may mean—and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.
Sometimes some of these interests run more or less parallel—then we are nearer to a simple maximum problem. But they can just as well be opposed. The general theory must cover all these possibilities, all intermediary stages, and all their combinations."
—John von Neumann & Oskar Morgenstern, Theory of Games and Economic Behavior (pp. 10-11)
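The point about conflicting maxima is easy to see concretely. Here's a minimal sketch in Python with made-up Prisoner's-Dilemma-style payoffs (the numbers are purely illustrative, not from the book): each player's "result" depends on a cell of the matrix that neither controls alone, and the cell that maximizes one player's function is not the cell that maximizes the other's.

```python
# Two players, two strategies each. Payoffs are illustrative only.
payoff_row = [[3, 0],   # row player's payoff at (row_choice, col_choice)
              [5, 1]]
payoff_col = [[3, 5],   # column player's payoff at the same cells
              [0, 1]]

cells = [(r, c) for r in range(2) for c in range(2)]

# The cell each player would pick if he controlled ALL the variables:
best_for_row = max(cells, key=lambda rc: payoff_row[rc[0]][rc[1]])
best_for_col = max(cells, key=lambda rc: payoff_col[rc[0]][rc[1]])

print(best_for_row)  # (1, 0): row player's favorite outcome
print(best_for_col)  # (0, 1): column player's favorite is a different cell

# No single outcome maximizes both functions at once, which is exactly
# why "maximize everyone's result simultaneously" is ill-posed.
assert best_for_row != best_for_col
```

This is the "greatest good for the greatest number" objection in miniature: the two payoff functions attain their maxima at different points, so a principle demanding both maxima at once prescribes nothing.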
And now for something completely different. Read with tongue ever-so-slightly in cheek.
First Circle—The Virtuous Heathens: Those who care strongly about liberty in one particular sphere (e.g. freedom of speech, freedom of religious practice, the drug war, etc.) but don't care much about it in other spheres. These people are infuriating for their lack of general theory underlying their politics, but at least they've sorta got the right idea and can make themselves somewhat useful. This circle contains members of the NRA, ACLU & other such single-issue organizations, and is guarded by John Stuart Mill.
Second Circle—The Lustful: Those who fall madly in love with a dim vision they have of a more egalitarian society and then hastily rush off to elope with it, without giving much thought about just how much promise there really is in the relationship. These people's hearts are often in the right place but they show a frightening lack of concern for whether or not the policies they endorse are actually likely to accomplish the goals they desire. This circle is filled with innumerable bleeding-hearts and is guarded by Thomas Sowell.
Third Circle—The Gluttonous: Those who support illiberal policies simply out of perceived self-interest, and like to paint themselves as victims despite living at a level of material comfort that most previous generations would consider luxurious. Immigration & outsourcing restrictionists, farmers, labor unions, people who want to be insulated from the costs of their health care, etc. This circle is guarded by Benjamin Franklin.
Fourth Circle—The Greedy: Lobbyists who think their peculiar obsession should be the government's top priority. Corporations pleading for protectionism, finger-wagging nannies on a crusade to enforce "public virtue", and generally anyone who wants everyone else to suffer for their sense of ideological privilege. This circle's most recent acquisition was the execrable Jack Valenti, and is guarded by Milton Friedman.
Fifth Circle—The Wrathful: People who are socialistic primarily out of an ugly resentment of the wealthy or anyone else they perceive as enjoying benefits they privately wish they could enjoy. Their instinct is not so much to see everyone doing well as to see those currently best-off doing much worse. This circle is guarded by Ayn Rand, which I think is a suitable punishment for both parties.
Sixth Circle—The Heretics: People who do seem to generally care about liberty but have an anomalous and largely sentimental attachment to illiberal policies in at least one sphere. They support freedom except the freedom to do drugs, or get an abortion, or freely migrate, or do anything they imagine undermines the war effort, etc. This circle is guarded by Ron Paul.
Seventh Circle—The Violent: This circle is packed with tyrants large and small, politicians, bureaucrats and thugs—those who unquestionably do active violence to human freedom. (I would also add overdominant parents to this list.) They're enabled by the members of other circles, but these are the ones who do the actual trigger-pulling. This circle is guarded by Thomas Jefferson.
Eighth Circle—The Fraudulent: The Malebolge of public intellectuals—those who have a sphere of influence greater than most of us, and are negligent in their exercise of it by contributing to the darkness and confusion. This sphere contains everyone from know-nothing idiots like Lou Dobbs of CNN and Bob Herbert of the NYT, to people who are really smart enough to know better yet resolutely avoid any systematic examination of their moral premises, like Matthew Yglesias and Reihan Salam. This circle is guarded by, who else, Friedrich Hayek.
Ninth Circle—The Traitors: Here lie lawyers & law professors, and also a significant number of economists, who have some degree of influence over actual legislation and policy. Particularly the ones like Orin Kerr & Brad DeLong who are smart, reasonable, and may even have some pro-liberty sympathies, but when the rubber hits the road they do some showy handwringing before siding with illiberal policies. This circle is guarded by David Friedman.
The worst thing about a police state is that the police become a weapon in the petty disputes of citizens. Law per se isn't what keeps us out of a Hobbesian war of all against all: It's having institutions that minimize the number and importance of disputes and mechanisms that settle them quickly. Too much law is just as bad (if not worse) for a society as not enough.
Estimates of the cost to date of the Global War on Terrorism run to about $600 billion and are only going to keep going up. Pretty much everyone has an opinion about what the returns to date have been, ranging from "the world is a significantly better place and it's completely worth the cost, plus much more" to "the world is a significantly worse place and we shouldn't be spending a dime on such a disaster". And in between there are hundreds of (presumably coherent, but often mutually inconsistent) more nuanced positions.
The persistence of this disagreement should be setting off alarm bells no matter where on the spectrum you find yourself: If we can't even come to anything remotely resembling a broad consensus on just what the consequences of a policy are, then clearly what we're lacking is broadly agreed-upon, checkable metrics of success. And if we don't have those, then why would anyone spend so much money on a project where we can't even determine the sign of the effect?
Surely it's uncontroversial to say that if we must spend money, it should be spent on projects where we can easily tell if they're having the desired effect or not. There ought not be a whole lot of room for reasonable disagreement on the matter (though there may be plenty of room to disagree on which effects are desirable). But at the end of the day, I have no clue what the hell Iraq is going to look like in 10 years and neither do you. Surely there should be some sort of discussion about whether this is a prudent investment strategy, even among those not resolutely hostile to the idea of something like the GWoT. $600B is a lot of moolah to throw away on roulette.
Everyone knows that theft is wrong. But if stealing a gun is the only way to stop a madman on a killing spree, we feel that someone should do it—that it would be right. It wouldn't be much of an exaggeration to say that most arguments about ethical philosophy divide according to how a person prefers to resolve this sort of dissonance. Some would say that even though you should do it, it's still wrong to steal the gun. Others would say that theft isn't in fact wrong, but we say it is because most of the time we don't like it. Both of these sound a little crazy, don't they? To my ears the latter sounds less crazy than the former, but reasonable people may disagree, and in any case wouldn't it be better if we didn't have to say anything crazy at all to get out of this problem?
In one of the most underappreciated philosophy papers (PDF) of the past 20 years, David Lewis comes up with a simple solution. (His subject matter is epistemology, but the situation in ethics is formally equivalent.) When we say something like "theft is wrong", there's a tacit understanding that this statement could be followed by a sotto voce ". . . except in cases we're properly ignoring". This sort of convention is commonplace: when we say we love a good rare steak, we don't normally specify all the conditions under which we would not actually want a steak, such as if we were already full, or when we're very ill, or while driving to work in the morning.
But of course, once some smart-alecky ethical philosopher gins up an ad hoc and outlandish counter-example which under most circumstances we'd properly ignore, we're no longer ignoring it and will then conclude that theft is right within the context of the hypothetical case. There's nothing terribly remarkable about this because the context changed between the times the two statements were made: theft can be, and is, wrong in most contexts we're likely to encounter, but right in a few highly uncommon contexts. We can admit that in the hypothetical case stealing would be right, and then go right back to saying "stealing is wrong" as soon as the philosopher walks away, without fear of contradiction.
Looking at it this way, we have the ironic result that philosophizing about ethics is largely an exercise in destroying ethical norms. This is what makes notions of right and wrong so elusive: they're context-sensitive in ways that are sometimes obvious and sometimes more subtle, and when we try to analyze them too closely they vanish, a bit like how your hands vanish when you zoom in on them with a high-powered microscope. Nevertheless, here is a hand.
The picture that results is of a set of limits, or metaphorically speaking a set of fences that you are not to cross, not to trespass. It is not a set of valuations assigned to every possible thing you might do. . . . morality is a set of fences where, if you cross them, you will be violating morality and will be in the wrong, but if you do not cross them, then you are fine. . . .
This also explains why the rules are easy to understand and to state, and why they have exceptions. They’re easy to understand because they need to be easily knowable by everyone. Simple rules are like straight fences. Rules aren’t actually visible, they’re in the mind and not in the physical world as actual fences. And similarly, if you were constructing invisible fences, the best sort of fence you could construct would be a straight fence, because it’s a lot easier to guess where all the different parts of an invisible straight fence are than it is to guess where all the different parts of a crazy curvy fence. All you have to do is bump against the straight fence at two points and then you’ve pretty much got the layout in your head, because two points define a line. But two points do not define a crazy curve. Analogously, the rules need to be simple. Rules, like invisible fences, need to be as simple as possible most of the time.
The way I would phrase this is that the terrain of ethics is fractal, and deciding whether something is right or wrong is analogous to deciding whether a point lies inside or outside the fractal's boundary. When you're far from its boundary that's an easy call to make: you can approximate the boundary with a simple curve such as an ellipse and use that rough heuristic to make the judgement. But because the boundary exhibits ever-finer levels of "roughness" as you look more closely at it, you need ever-more nuanced approximations the closer the point is to the actual boundary. A hypothetical example that seems to show that a proposed ethical rule gets the wrong answer in some cases is merely a way of showing that a proposed approximation passes through the boundary.
There is no easy way to decide every case a priori, simply because in order to do so we'd need to know the complete perimeter, which, having finite minds, we can never do in practice. This is why anyone who claims to be able to decide all possible ethical questions based on a few simple and self-evident axioms is selling snake oil: There's no such complete set of axioms, and the best we can hope for is an approximation that serves us well in all the sorts of cases we've tested it against up to now. Rules are, in a sense, made to be broken -- and then reshaped anew. But equally important for beings with finite minds, our decisions come more quickly and easily the simpler our approximation is, which is why we should seek for rules that are (in the words of some German wiseguy) as simple as possible -- but not simpler.
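The fractal metaphor can be made literal with the most famous fractal boundary of all. A minimal sketch, using the Mandelbrot set's escape-time test (the particular points and iteration cap are chosen purely for illustration): membership calls are cheap for points far from the boundary and get progressively more expensive the closer you stand to it.

```python
def escape_time(c, max_iter=1000):
    """Iterations until |z| > 2 under z -> z*z + c, or max_iter if it never escapes."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# A point obviously outside the set: the verdict takes one iteration.
far_outside = escape_time(2 + 2j)
# A point obviously inside: the simple heuristic "it never escapes" holds.
far_inside = escape_time(0 + 0j)
# A point near the boundary: deciding takes many more iterations of work.
near_edge = escape_time(-0.75 + 0.05j)

print(far_outside, far_inside, near_edge)
assert far_outside < near_edge <= far_inside
```

The analogy to ethical judgement is that the easy cases (clearly inside or clearly outside) are settled by a crude approximation almost instantly, while the philosopher's contrived hypotheticals all live near the edge, where no finite amount of refinement ever yields the exact perimeter.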
I only just now got around to reading Will Wilkinson's magisterial Cato Policy Analysis paper on happiness research, and am sorry I waited almost a whole week. It hits all the big points, bringing together a wide array of research with keen philosophical insight to make sense of what happiness research does and doesn't tell us. If you have even the slightest interest in the subject and haven't read it, I strongly suggest you do so. The methodological critiques Will makes are all quite sensible, but probably the most forceful point in the paper is that the lazy assumption made by many happiness researchers that economic growth is at best neutral and at worst inimical to human happiness is utterly unsupported by the best available data and could not be further from the truth. After reading this, any serious utilitarian who doesn't place economic growth at or near the top of their policy priorities will have a lot of explaining to do.
(For more happiness policy goodness, check the latest roundtable at Cato Unbound.)
There's a problematic tendency, particularly prevalent when dealing with emotionally-charged subjects, for people with little or no understanding of a complex subject to handle numbers that come out of that subject in much the same way that Moses handled the Ten Commandments: They are handed down from higher intellectual powers, whose ways are mysterious to us but whose authority is without question. They are the outputs of a black box whose inner workings are completely opaque, but which is quite useful to those looking for a blunt object with which to bash their opponents over the head.
Examples are not hard to find: Creationists making reference to Haldane's limit, global warming deniers talking about arctic ice cores, price control fetishists banging on about Card & Krueger, race-skeptics quoting Lewontin's infamous 85/15 figure, etc etc etc. We've all seen it, many of us have probably even done it at some point, and it's a stupid human trick that's centuries old.
I don't think that the objection Scott raises to Robin Hanson's idea has much force to it. If the rule were that it's illegal to sell X to anyone with an IQ below 120, then smart people re-selling X to stupid people would still be illegal.
Late last year I started taking some stabs at fleshing out the philosophical program outlined by Nick Weininger back in 2005, and mumbled some promises of further posts Real Soon Now. Other matters have consumed my attention in the interim, but Will Wilkinson has at last awakened me from my torpor by coming dangerously close to writing my posts for me, so I'd better get cracking again.
The general thrust of my previous two posts has been to argue that consequentialist and deontological ethical reasoning are mutually intelligible and complementary to one another rather than being incommensurable and opposed as is commonly assumed, and that the airy-fairy arguments over them are a veil for substantive disagreements about what matters in ethics. The fundamental problem of ethics isn't how to judge whether an act is good or bad or right or wrong, but rather what is ethically relevant: what matters, and why? Any talk about how to judge actions presumes an answer to this question.
The consequentialist vice is simply to skirt daintily around this question, merely making arguments of the form "if you do X, Y will happen -- and you don't want that, do you?" I have a great deal of sympathy with this approach, both because it usually works well and because it's easy: you can get a lot of mileage out of simply explicitly deducing the consequences of a policy and letting people's moral intuitions do the normative legwork for you. But this is philosophically unsatisfying, since it leaves a great big blank where the consequentialist maximand should be. What do two consequentialists do when they both agree on what the consequences of a policy are likely to be, but one still favors it while the other doesn't? Shrug and walk away in mutual bafflement, most of the time.
Deontology presents us with the opposite problem: rather than having no foundation, it has an embarrassment of foundations. Ask three deontologists a tricky moral question and you're likely to get three different answers, depending on what duties and rights they think people have. Those of a Rawlsian bent might argue for a right to a basic minimum income (presumably to be provided by taxation), while those of a Randian bent would assert an absolute right not to be taxed. How do they figure out who's right? Usually by seeing who can pound the table the longest and hardest.
That's the question the Isonomist takes a stab at answering in his response to my previous post. His answer is "probably yes", but as always there's no replacement for data. I find it hard to believe that no economist has tried to answer this question, so does anybody know of any papers on the subject?
[insert pro-forma ironic comment about unintended consequences here]
Joe already gave an adequate response to the amazing Dr. Chris Clarke's first and second infallible methods of inducing brain haemorrhaging among libertarians (at least, the ones not sharp enough to spot really glaring fallacies). I just want to touch on the third.