When Professors Aren't Wise

Cass Sunstein, in When Crowds Aren't Wise, says:

Often the average judgment, which we might describe as the group's "statistical judgment," will be uncannily good.

What remains widely unappreciated is why, and when, statistical judgments will prove to be accurate or inaccurate. The best explanations come from the Marquis de Condorcet, a Frenchman who offered, in 1785, some simple arithmetic, captured in what is now known as the Condorcet jury theorem...

To see how the theorem works, suppose that a number of people are answering the same question and that there are two possible answers, one correct and one incorrect. Assume, too, that the probability that each individual will answer correctly exceeds 50%. With a few calculations, the theorem shows that the probability that a majority of the group will answer correctly increases toward 100% as the size of the group increases...

But for those who embrace crowd wisdom and prediction markets, there's an important qualification. As Condorcet himself warned, his theorem reveals the downside of group decisions. Suppose that each individual in a group is more likely to be wrong than right because relatively few people in the group have access to accurate information. In that case, the likelihood that the group's majority will decide correctly falls toward zero as the size of the group increases.
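The arithmetic here is easy to check for yourself. Here's a quick Python sketch (mine, not Sunstein's or Condorcet's) computing the chance that a majority of n independent voters, each right with probability p, answers correctly:

```python
from math import comb

def majority_correct(n, p):
    """P(a strict majority of n independent voters is right), each right w.p. p.
    Use odd n so ties cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.6), 4), round(majority_correct(n, 0.4), 4))
# With p = 0.6 the majority tends toward certainty as n grows;
# with p = 0.4 it tends toward zero -- Condorcet's two cases.
```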

This would be a correct criticism of a poll, where everyone responds and the output is therefore governed by the beliefs of the plurality. Part of the beauty of a prediction market, however, is that it does not work that way. Traders self-select based on their degree of confidence in their beliefs and the extent to which those beliefs differ from the current market judgement. Predictions are not based on the majority's judgement, but on the judgement of a self-selected group of people willing to wager resources that their opinions are correct. The claim that "the likelihood that the group's majority will decide correctly falls toward zero" is simply wrong, or at least irrelevant, since prediction markets don't depend on the beliefs of the majority.

The case against Sunstein's argument gets even stronger when you consider that the traders are not only a self-selected subset of all potential traders, but also size their bets according to their confidence. A single trader who is very certain he is correct can move an entire market. Again, none of this is reflected in the Condorcet analysis. I'm not saying that the combination of these factors means prediction markets are always right. But a market that is wrong only "when the majority of dollars bet by the self-selected group of traders willing to risk resources is wrong" fails under a vastly rarer condition, and so faces a vastly weaker criticism, than one that is wrong "whenever the majority does not know the right answer."
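To see how much the self-selection and the stake-weighting matter, here is a toy simulation (entirely my own construction; every parameter is invented) of Condorcet's unfavorable case: most participants are slightly worse than a coin flip, while a small informed minority is both right and confident. The poll counts every vote; the market counts only the confidence-weighted stakes of those confident enough to trade at all.

```python
import random

def simulate(trials=2000, n_people=1000, n_informed=50, seed=0):
    """Toy contrast between a poll (everyone votes once) and a market-style
    aggregation (only confident people participate, staking more the more
    confident they are). Parameters are illustrative assumptions: the
    informed minority is right 90% of the time and very confident; everyone
    else is right 45% of the time, with confidence spread from 50% to 75%."""
    rng = random.Random(seed)
    poll_right = market_right = 0
    for _ in range(trials):
        votes_right = 0
        stake_right = stake_wrong = 0.0
        for i in range(n_people):
            informed = i < n_informed
            p_correct = 0.90 if informed else 0.45
            confidence = 0.95 if informed else rng.uniform(0.50, 0.75)
            is_right = rng.random() < p_correct
            votes_right += is_right                  # the poll: one head, one vote
            if confidence > 0.70:                    # self-selection into the market
                stake = confidence - 0.50            # bet sized by confidence
                if is_right:
                    stake_right += stake
                else:
                    stake_wrong += stake
        poll_right += votes_right > n_people / 2
        market_right += stake_right > stake_wrong
    return poll_right / trials, market_right / trials

print(simulate())  # e.g. the poll is usually wrong, the market usually right
```

Nothing hinges on the exact numbers; the point is that the market's verdict is determined by who shows up and how much they bet, not by how many heads are in the room.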

Next let's consider the empirical example given:

Some prediction markets fail for just this reason. They have done really badly in predicting President Bush's appointments to the Supreme Court, for example. Until roughly two hours before the official announcement, the markets were essentially ignorant of the existence of John Roberts, now the chief justice of the United States. At the close of a prominent market just one day before his nomination, "shares" in Judge Roberts were trading at $0.19—representing an estimate that Roberts had a 1.9% chance of being nominated. Why was the crowd so unwise? Because it had little accurate information to go on; these investors, even en masse, knew almost nothing about the internal deliberations in the Bush administration.

We need to be careful here, as the error is subtle. There are two different criticisms one could make of this particular failure. One of them is the Condorcet argument, which is that the market was wrong because the condition that a plurality knew the right answer was not met. Essentially, this is criticizing prediction markets as aggregators of information, functions which transform individual opinion into an overall judgement. The other is the wholly separate argument that the market was wrong because it did not have access to any useful information.

Which argument applies depends on whether any trader knew about Roberts' appointment in advance. If even a single trader had been privy to inside information about the Roberts pick, he surely would have been willing to wager all his liquid capital to turn a quick profit. Traders without such information would have been unlikely to hold large positions, so the market consensus would have moved rapidly toward the correct answer. The fact that the markets did not move is thus strong evidence that no trader in those markets had the inside scoop, and the question of whether the market failed as an aggregator does not even arise.
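I don't know exactly how those exchanges matched orders, but the mechanics are easy to illustrate with an automated market maker such as Robin Hanson's logarithmic market scoring rule (LMSR). The sketch below uses invented parameters, chosen only so the contract starts near the roughly 2% price Sunstein quotes:

```python
from math import exp, log

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function; the cost of a trade is the change in this value."""
    return b * log(exp(q_yes / b) + exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    """Instantaneous price of the YES contract (its implied probability)."""
    return exp(q_yes / b) / (exp(q_yes / b) + exp(q_no / b))

b = 100.0                           # liquidity parameter (invented)
q_yes = 0.0
q_no = b * log(0.98 / 0.02)         # start the YES price near 2%
print(round(lmsr_price(q_yes, q_no, b), 3))        # ~0.02

# A single insider, certain Roberts will be named, buys a large block of YES.
shares = 800.0
cost = lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)
q_yes += shares
print(round(lmsr_price(q_yes, q_no, b), 3), round(cost, 2))
# The price jumps to ~0.98; if YES pays $1 per share, the insider nets
# roughly (shares - cost) in profit, so the trade is well worth making.
```

In this toy setup the insider pays a few hundred dollars for an $800 payout and drags the price to near-certainty in a single trade; that nobody did anything of the sort suggests nobody knew.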

This brings us to the second possible criticism, which is that the markets had poor judgements because no one involved had useful knowledge. But this isn't much of a criticism. The job of an information aggregator is to aggregate information. It has no oracle, no crystal ball - it can only work with what it has. If it doesn't have the information, of course it won't predict correctly (garbage in, garbage out). So yeah, an aggregator with no information input will produce uninformative output. This is true, but not particularly profound, and it is not specific to prediction markets. Actually, one of the advantages of prediction markets as aggregators is that they pay for information, so they are more likely to have relevant information to work with than some other aggregation methods (like polling).

Sunstein concludes:

A computer company executive could sensibly rely on an internal prediction market if she is asking about completion dates for the company's own products in development. But should the manager ask employees about completion dates for competitors' products? That wouldn't be a good bet. When most people are not likely to be right because the group has little relevant information, it's best to ignore their judgments—and to try to find an expert instead.

As I've demonstrated, the manager would not merely be wagering on whether the plurality of the group of all traders can out-predict the expert; he would be wagering on whether the plurality of dollars staked by self-selected members of the group can out-predict an individual expert. Having analyzed the wisdom of Google's crowds, and having seen Phil Tetlock's results on expert political judgement, I am prepared to take that wager if Professor Sunstein wants to put his money where his mouth is.

Unfortunately, I doubt I'll be able to take his money. The linked article, published Sep 1, 2006, makes this mistake, but the prototype, which Sunstein posted while guest-blogging for Larry Lessig in the summer of 2005, states: "Are markets likely to do better than group averages? The simplest answer is yes, because participants have strong incentives to be right, and won’t participate unless they think they have something to gain." For some reason, this wisdom seems to have been lost twixt draft and final copy - hopefully my post will bring it back.
