pb > c

In this article, we find a possible justification of bias (yes, justification, not merely explanation). From the article:

whenever the cost of believing a false pattern is real is less than the cost of not believing a real pattern, natural selection will favor patternicity. They begin with the formula pb > c, where a belief may be held when the cost (c) of doing so is less than the probability (p) of the benefit (b). For example, believing that the rustle in the grass is a dangerous predator when it is only the wind does not cost much, but believing that a dangerous predator is the wind may cost an animal its life.

(actually, I believe the above should read: when the cost (c) of doing so is less than the probability (p) times the benefit (b))

This suggests a justification for belief which can differ from person to person, depending on their goals, because their goals affect the costs and benefits. Rationally, we should do whatever maximizes our expected benefit - and this means that in particular we should believe whatever maximizes our expected benefit. This makes sense, I think. We do indeed need to weigh false positives against false negatives. How can we weigh them, except on the basis of maximization of benefit to ourselves?
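To make that weighing concrete, here is a small sketch in Python (the function name and all the numbers are made up for illustration; nothing here comes from the article). It just encodes the corrected rule: hold the belief when the probability of the benefit times the benefit exceeds the cost, i.e. p*b > c.

```python
# Hypothetical illustration of the p*b > c rule from the quoted passage.
# p: probability that the rustle really is a predator
# b: benefit of believing "predator" when it is one (e.g. not being eaten)
# c: cost of believing "predator" when it is only the wind (wasted flight)

def should_believe_predator(p: float, b: float, c: float) -> bool:
    """Hold the costly-but-safe belief when its expected benefit
    exceeds the cost of holding it falsely: p * b > c."""
    return p * b > c

# Even a small probability of a predator can justify the belief
# when the benefit dwarfs the cost of a false alarm.
print(should_believe_predator(p=0.05, b=1000.0, c=10.0))  # True:  0.05 * 1000 = 50 > 10
print(should_believe_predator(p=0.05, b=1000.0, c=80.0))  # False: 50 < 80
```

The numbers are arbitrary; the point is only that the threshold probability at which belief becomes worthwhile is c/b, so different costs and benefits give different people different thresholds. That is what the Abe-and-Ben example below turns on.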

If Abe favors markets and Ben favors socialism and if Anthropogenic Global Warming (AGW) is false, then the cost to Abe of the false belief that AGW is true is greater than the cost to Ben. This false belief will increase the scope of socialism within the mixed economy, which is a greater cost to Abe than it is to Ben.

And the same holds in reverse: if Abe, as above, favors markets and Ben favors socialism, then if AGW is true, the cost to Abe of the false belief that AGW is false is less than the cost to Ben of that same false belief, because the false belief will increase the scope of capitalism within the mixed economy.

This is just a preliminary look at the idea of pb > c - my particular application here may be half-baked, but I think there's something here.

Just to be clear: this analysis does not recommend believing a knowable falsehood. Perhaps it could in principle, but to expand on the rustle-in-the-grass example: if the rustle in the grass is wind, then it is better to believe that it is wind than that it is a predator. And if the rustle is a predator, then it is better to believe that it is a predator than that it is the wind. So, whatever the truth happens to be, it is better to believe the truth than to believe a falsehood. But if the only evidence available is the rustle in the grass, which might be a predator and might be the wind, then whether it makes sense to treat it as (and therefore to believe that it is) the wind, or as a predator, or (a third, so-far unmentioned option) to suspend judgment, depends on probabilities and costs and benefits.
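To make that three-way dependence concrete, here is a small sketch (the stances, costs, and probabilities are all made-up numbers, not taken from the article) that picks whichever stance has the lowest expected cost:

```python
# Hypothetical sketch of the three options mentioned above: treat the rustle
# as wind, treat it as a predator, or suspend judgment. All costs are made up.

def expected_cost(cost_if_predator: float, cost_if_wind: float, p_predator: float) -> float:
    """Expected cost of a stance, given the probability that a predator is there."""
    return p_predator * cost_if_predator + (1 - p_predator) * cost_if_wind

def best_stance(p_predator: float) -> str:
    stances = {
        # (cost if it really is a predator, cost if it is only wind)
        "believe wind":      (1000.0,  0.0),   # possibly eaten vs. nothing lost
        "believe predator":  (   0.0, 10.0),   # safe vs. wasted flight
        "suspend judgment":  ( 300.0,  2.0),   # slow reaction vs. mild vigilance
    }
    return min(stances, key=lambda s: expected_cost(*stances[s], p_predator))

for p in (0.001, 0.02, 0.3):
    print(p, best_stance(p))
```

With these particular numbers, believing it is the wind wins at very low probabilities, suspending judgment wins in the middle, and believing it is a predator wins once the probability is high enough. Change the costs and the thresholds move, which is exactly the goal-dependence this post is about.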


Beliefs are beliefs and

Beliefs are beliefs and should not depend on our goals. How we act on those beliefs is different. If I see the grass moving, my belief will be a distribution, assigning a high probability to wind and a low probability to predator. I will, however, choose to flee because of the cost analysis; it does not mean I believe there is a predator.

When I bet on John McCain's victory (at 1:7), some people were puzzled. They asked me, "Oh, you think he's going to win," and I answered, "No, I think he's going to lose." You're making their mistake.

But evolution might not work like that

Instead of giving you a true probability distribution and letting you act on costs and benefits...

...evolution might decide it's better if your probability distribution is skewed toward fallacy.

At the very least, it might decide it's better if your probability distribution is skewed falsely toward predator than toward wind.

There's no point in skewing

There's no point in skewing the probability distribution, per se. The original point was to justify bias. True, evolution might prefer a cautious bias to a reckless bias, but it will not prefer a cautious bias to a lack of bias.

Resources resources

Beliefs are beliefs and should not depend on our goals.

So one might have thought. But an animal (and that includes us most of the time) is typically not able to entertain in its mind multiple possibilities, all with probabilities attached. It has limited resources and does not have the luxury of doing this. Instead, what the animal does (and this includes us) is, after some initial decision-making during which it implicitly (though I doubt explicitly) performs the cost analysis, commit itself fully to a belief. The animal acts as if one of the possibilities were true, and this action includes not only the activity of the muscles but also the activity of the nervous system, so that there is no part of the animal left over to entertain the alternative hypothesis. (I am, of course, speculating, but I'm pretty confident that this is roughly correct.)

The question, then, is when, precisely, an animal should make the switch, from one belief to another, or from suspension of judgment to belief. The answer depends on the costs and benefits, and the costs and benefits depend on the goals.

If I see the grass moving, my belief will be a distribution, assigning a high probability to wind and a low probability to predator. I will, however, choose to flee because of the cost analysis; it does not mean I believe there is a predator.

Ideally you might do this if you had infinite computational resources. But you do not. And so you must make do with what you have. Of course, if you do have enough resources, then sure, you might hold in your mind a distribution and do a cost analysis.

When I bet on John McCain's victory (at 1:7), some people were puzzled. They asked me, "Oh, you think he's going to win," and I answered, "No, I think he's going to lose." You're making their mistake.

I am sure this is literally true, but you were not an animal possibly being stalked in the jungle. You had plenty of time to think about it, so you had the luxury of using the resources necessary literally to implement what you describe (contemplate multiple possibilities, assign probabilities, calculate best action).

I agree. Resources matter

I agree. Resources matter, and probability distributions need to be compressed and approximated. The approximation chosen depends on goals. It makes sense to forget the parts of the distribution which do not matter to me.

For simple organisms there is not even belief; reaction is directly tied to input. Belief requires a mapping of the world and thus consciousness. I think most animals who would hold beliefs are capable of keeping track of various hypotheses, even after committing to one scenario, but even humans have trouble; cognitive dissonance is an example.

However, all else equal, this does not justify holding biased beliefs. It's just that we're doomed to have biased beliefs, and the ideal bias depends on our goals.

Not sure where to start

You'll notice that the material I was quoting didn't talk about the assignment of probability distributions to sets of propositions. Instead, he talked about believing things that are not certain. And his way of talking about it is not idiosyncratic. It is common. You are employing an idea of belief - and an idea of the justification of belief - which is at least not usual.

None of this would matter if your ideas about belief and about the justification of belief unquestionably obsoleted common ideas. But while the assignment of probability distributions is a fine notion - I won't argue that you mustn't do it, nor would I argue that it is never done (it is certainly done by many sophisticated gamblers) - the common notions of belief and justification that I am employing are not thereby shown to be obsolete.

Specifically, in my view, as evidence gathers for some possibility (one scenario), at some point a mental switch is flipped and uncertainty gives way to belief. You, yourself, sort of recognize this moment - which should be familiar from introspection - as "committing to one scenario", though you apply the idea of commitment only to outward, muscular action. I apply the idea of commitment all the way through the body, including the nervous system. Just as a person can commit his visible behavior to one scenario - engaging in behavior which relies on that scenario being true - so can he commit his neural activity to one scenario. Specifically, he can commit his belief to that scenario - i.e., he can believe that scenario. I see the brain and mind as part of action - as much a part as the muscles. Without the brain and the mind the muscles would do nothing very interesting. Belief is part of action; belief is the keystone of action. Plenty of self-help books recognize the essential importance of this point, and it's right. Action is not decoupled from belief. Sure, as a possibility, action can go forward with a mere semblance of belief guiding it (e.g. what actors on a stage do), but trying to fake your way through when you don't really believe not only requires more mental energy but reduces your probability of success. It is a kind of lie, and lies are very hard to sustain. As for the simpler animals, which have not developed the skill of faking a belief and which simply act on their beliefs, there is no question of not-really-believing-X-but-still-acting-as-if-one-does.

In any case, it's something people do - i.e., once the evidence has mounted sufficiently for one scenario, they commit not only their outward behavior but also their belief to that scenario.

And, furthermore, it is common to consider a belief to be justified if the evidence for it is sufficiently strong. The evidence does not have to give absolute certainty. (Sometimes this is called probable knowledge. Bertrand Russell: "We have to accept merely probable knowledge in daily life...") In fact, it rarely does give absolute certainty - and, I think, it actually never does. Even the truths of mathematics are known to us only through proofs, and it is always possible - improbable, but possible - that there is a flaw in any given proof which no one has yet noticed.

So nothing is certain, and yet it is common to consider beliefs to be justified. You may answer, "bah, that is folk psychology, and is wrong." But I offer an account of justification which allows a person's belief to be justified even if it is not certain. I've already given this account: "Rationally, we should do whatever maximizes our expected benefit - and this means that in particular we should believe whatever maximizes our expected benefit." By now I've explained why it is that I consider believing to be part of doing, and therefore how it is that I am ready to make the inference from what we should do to what we should believe.

My justification of belief in the merely probable yields, as an unavoidable (but not, in my eyes, unwelcome) additional consequence, the possibility of the threshold of justified belief differing between different people depending on their goals. I don't think these two things are really separate. That is to say that, whether or not people recognize it, once we accept the legitimacy of belief in the merely probable, then logically we must also accept the legitimacy of a dependency of belief on goals - because it is the goals that define the thresholds of justified belief in the merely probable.

So there you are.

This is a great post

I don't know if this rustle-in-grass example has been stated elsewhere before, but I've used it myself on other message boards. It is a reasonable justification, or at the very least an explanation, of the mind's ability to system-build.

What do I mean by system-building? I mean the tendency to pull together disparate observations into a unified theory.

One example might be Murray Rothbard's grand unified theory of libertarianism. He pulled together:

* Misesian praxeology
* Randian natural rights
* WWII revisionism and a strong antiwar stance

At various times, he also fit all of these within the "right"- or "left"-wing system.

When I first read his grand unified system, I thought to myself, "Oh yeah, they fit together nicely. What a smart guy that Rothbard was." Now I think they have nothing to do with each other, and are themselves questionable to various degrees.

-----------------------

Getting back to the larger point: I believe this systematizing tendency is not talked about enough as a property of human nature. The ideas that humans are self-interested, are status-seeking, have greater affinity for their kin, and others, are pretty popular. But the idea that humans have a systematizing bias is rarely talked about.

From my observations of people, I believe that there is a deep longing within human nature to find an ethos. Without it, they feel empty. Once this ethos is found, they'll stretch observations to fit into this ethos. They'll sacrifice the truth in favor of this ethos. And it all goes back to the rustle-in-grass.

Ethos or Suicide?

"Rationally, we should do whatever maximizes our expected benefit - and this means that in particular we should believe whatever maximizes our expected benefit." Constant

“Once this ethos is found, they'll stretch observations to fit into this ethos. They'll sacrifice the truth in favor of this ethos. And it all goes back to the rustle-in-grass.” Jonathan

I agree, but what needs to be emphasized is that the results of this ethos may bring great harm and misery, even as it brings supposedly even greater intangible benefits. Consider a different example than "rustle in the grass," which I shall call the "honor your ancestor, join him" policy. I was watching a movie in which a cholera epidemic in China was being aggravated by the custom of burying the dead relative near the river, upstream of the water supply, because the ancestor's spirit must have access to fresh water. It was "rational" to bury the person near the water because of the expected benefits.

From an evolutionary perspective, the benefits of this type of behavior must outweigh the costs in most cases. What benefits? I leave it for you to speculate, but perhaps there is a strong survival benefit to the mere fact of group unity and conformity, or ethos, as you say. This drives those of us who support "truth, freedom, equality and justice" insane. Sometimes it is better just to be wise to this than to expect to change it.

Dave

Human nature vs. math of economics?

I think the human mind works by first analyzing the probability that the data is correct and then the probability of the effect on the person. Is the sky really falling, and if it is falling, will it damage me?

But the human mind tends to turn negative data statements into positive statements. This is well known to the advertising industry, especially the ciggy butt industry. An ad which reads "Don't start to smoke" will be remembered subconsciously as "start to smoke."

Just don't

don't Give me money, and lots of it.

The other part of the problem

is that people have forgotten the difference between "prejudice" and "discrimination."