IQ: The Map Is Not the Territory

I often see people assert that intelligence is normally distributed. This may or may not be true, but the evidence that people generally cite to support this claim—that IQ is normally distributed—is completely irrelevant. IQ is forced into a normal distribution because it's defined that way; this tells us nothing at all about the shape of the underlying intelligence distribution.

Before I elaborate, let me make it clear that the point I'm making is completely independent of the following questions:

  • Whether the idea of a scalar measure of intelligence is even theoretically coherent.
  • Whether it's possible to design tests that measure this.
  • Whether the IQ tests currently in use actually measure this.

Assume for the sake of argument that all of these propositions are true. Now here's how IQ tests work: A test measuring a wide variety of cognitive skills is given to many people. Each person gets a raw score, which is essentially the number of questions he answered correctly*. IQ scores are then assigned based on each person's percentile within the raw score distribution (for his age group). The mapping from raw score to IQ is roughly as follows:

Percentile    IQ
       1%     65
       5%     75
      10%     81
      25%     90
      50%    100
      75%    110
      90%    119
      95%    125
      99%    135


In the process of conversion, all information about the underlying distribution of raw scores is lost. The distribution of raw scores may be normal, but it could just as easily be bimodal, log-normal, or any kind of distribution at all. As long as there aren't too many people with exactly the same raw score, any distribution can be transformed into normally distributed IQ scores.
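To make the mechanics concrete, here's a minimal sketch of the conversion just described (my illustration, not from the original post). It ranks deliberately bimodal raw scores and pushes each percentile through the inverse normal CDF on the conventional mean-100, SD-15 scale, which is the scale the table above matches (IQ = 100 + 15·Φ⁻¹(percentile)):

```python
# A minimal sketch of the raw-score -> IQ conversion described above.
# The 100/15 scale is the common convention and matches the table.
import random
from statistics import NormalDist

norm = NormalDist(mu=100, sigma=15)

# Deliberately non-normal raw scores: a bimodal mixture of two groups.
raw = [random.gauss(20, 3) for _ in range(5000)] + \
      [random.gauss(45, 3) for _ in range(5000)]

def iq_scores(raw_scores):
    n = len(raw_scores)
    order = sorted(range(n), key=lambda i: raw_scores[i])
    iq = [0.0] * n
    for rank, i in enumerate(order):
        percentile = (rank + 0.5) / n      # midpoint rank -> percentile
        iq[i] = norm.inv_cdf(percentile)   # force onto the normal curve
    return iq

iqs = iq_scores(raw)  # normal by construction, whatever shape `raw` had
```

The output is normal by construction, whatever shape `raw` has, which is exactly why the shape of the IQ distribution tells us nothing about the shape of the raw-score distribution.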

Now consider three people with IQs of 75, 100, and 125—call them Al, Bob, and Chris, respectively. What we can say about their intelligence is that Chris is smarter than Bob, and Bob is smarter than Al, but we can't say anything about how much smarter anyone is than anyone else. The difference in intelligence between Al and Bob is not necessarily equal to the difference between Bob and Chris, even though the difference in IQ is.

So why is it done this way, instead of reporting the raw scores directly? I believe it's because some or all of the assumptions above are incorrect. If Bob gets significantly more questions right than Al does, then we can say with a reasonable degree of confidence that Bob is smarter than Al, but we really can't say how much smarter. The difference in raw scores is highly sensitive to the difficulty of the questions, and we don't know how hard to make the questions to produce a true cardinal metric of intelligence.

In fact, I'm not even sure the first assumption is true. I really can't think of a meaningful interpretation of the sentence "Chris is twice as smart as Al", or even "Chris is as much smarter than Bob as Bob is than Al."

Of course, that's all academic. The important point to take away is that even though IQ may look like a cardinal value, it's really just a tarted up percentile ranking. Why we don't just use percentiles instead of IQ scores, I really don't know. I would think it would make things much clearer, but I guess the IQ scale does make it easier to talk about extreme outliers.

*Test designers may make some questions worth more than one point in order to weight them more heavily, but that's not important here.


Agreed but you forgot

Agreed but you forgot something. Most quantitative human features (height, skull size, etc.) are normally distributed. If you want to look at correlations, it makes sense to fit a normal curve to IQ.

Evolutionary psychology hypothesis

I've heard a hypothesis rooted in evolutionary psych suggesting that men and women may have different distributions of intelligence/behavior characteristics.

That is, an AVERAGE woman is more likely to be able to reproduce than an AVERAGE man (where AVERAGE is measured relative to any number of criteria). That's because women of every quality will seek to mate with above-average men; above-average men will be able to mate with women of every quality. Faced with this dynamic, natural selection provides greater incentives for risk-taking where men are concerned.

This risk-taking need not be limited to a man's choices or behavior; it may also extend to a man's innate qualities. The genes of a mother who reliably produces average male babies will not get passed down beyond the next generation. But the genes of a mother who can produce male babies of variable quality -- even if some are extraordinarily good and some are extraordinarily bad -- are more likely to get passed down. Consequently there may be evolutionary reasons to expect the distribution of all manner of attributes among men to differ from the distribution among women.

Now, maybe these differences would show up simply as a larger standard deviation for men than for women. But there's also a possibility that men could have non-standard -- even bi-modal -- distributions.

Now, I generally assume (in the absence of contrary evidence) that any pool of data will tend to form a normal distribution. Since hearing this evolutionary psych theory, however, I'm less inclined to make that assumption regarding anything that might be influenced by gendered natural selection.

Larry Summers

This hypothesis (that there may be different distributions even if the averages are the same) is what got Larry Summers into trouble.

Robert Trivers

That is, an AVERAGE woman is more likely to be able to reproduce than an AVERAGE man (where AVERAGE is measured relative to any number of criteria). That's because women of every quality will seek to mate with above-average men; above-average men will be able to mate with women of every quality. Faced with this dynamic, natural selection provides greater incentives for risk-taking where men are concerned.

A little off-topic, but this may interest you as well:

http://en.wikipedia.org/wiki/Parent-offspring_conflict

Robert Trivers is still pounding out some nice ideas.

All this theory tells us is

All this theory tells us is that the variance of the male distribution should be higher than the variance of the female distribution. It doesn't tell us anything about the shapes of the distributions, other than that they may be different and one should be wider.

Well not even that.

Well not even that. Increased variance makes sense, but that's not the only way to alter the distribution so that your son gets the most women.

The funny thing is that for two distributions $F$ and $G$, the relation "$F$ beats $G$", defined by
$$R(F,G) = \iint \mathbf{1}_{\{x > y\}} \, dF(x) \, dG(y) > \tfrac{1}{2},$$
is not transitive.

Let's say that two women have two different distributions for drawing males. Each has a son, and only one of the sons gets the girls and produces grandkids. Obviously one woman can have a "better" distribution of offspring than the other, but "better" isn't transitive.

For a better explanation, see nontransitive dice.

It's a Red Queen race on a merry-go-round.
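For the curious, a minimal sketch (mine, not the commenter's) of that nontransitivity, using the classic set of nontransitive dice:

```python
# Nontransitive dice: R(A,B) > 1/2 and R(B,C) > 1/2, yet R(C,A) > 1/2.
from itertools import product

A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)

def beats(f, g):
    """Probability that a roll of die f exceeds a roll of die g."""
    wins = sum(x > y for x, y in product(f, g))
    return wins / (len(f) * len(g))

print(beats(A, B), beats(B, C), beats(C, A))  # each is 5/9 ≈ 0.556
```

Each die beats the next with probability 5/9, and the cycle closes, so "has the better distribution" yields no consistent ranking.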

Scalar vs Ordinal

Arthur, you're right that many objective measures are normally distributed (more or less). However, you'd agree that "A is 10% taller than B" and "A is taller than 10% more of the population than C is" mean different things, yes? If both statements are true, it does not mean that B and C are the same height.

The problem with IQ (or g, or many other measures) is that the first comparison isn't even defined. One cannot reasonably say "Q is 10% more intelligent than W", so all we have is the second: "Q is more intelligent than 10% more of the population than W is".

If we forget the difference, it's very easy to infer far more from IQ than it really supports.

That's an interesting point.

That's an interesting point. The natural follow-up question: is this really the distribution that produces the strongest correlations, or is there some other transformation that would do better? Maybe using percentiles for everything?

Nonparametric statistics

Is this really the distribution that produces the strongest correlations?

You can try using nonparametric correlations directly, rather than deriving a parameter from the data as is done for IQ and then using parametric correlations. Since a nonparametric correlation drops a key assumption it's presumably not as powerful, but since it doesn't employ a dubious assumption it may be less dubious.
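As a small illustration of the difference (my sketch, assuming numpy and scipy are available): a rank-based statistic like Spearman's correlation is unchanged by any monotone re-labeling of the scores, exactly the kind of re-labeling the raw-score-to-IQ conversion performs, while Pearson's parametric correlation is not:

```python
# Sketch: rank-based (nonparametric) vs. parametric correlation.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
raw = rng.normal(size=1000)                      # hypothetical raw scores
other = raw + rng.normal(scale=0.5, size=1000)   # a correlated trait

# A monotone but nonlinear re-labeling of the raw scores.
relabeled = np.exp(raw)

print(pearsonr(raw, other)[0], pearsonr(relabeled, other)[0])    # differ
print(spearmanr(raw, other)[0], spearmanr(relabeled, other)[0])  # identical
```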

Is there really an underlying mistake?

I often see people assert that intelligence is normally distributed. This may or may not be true, but the evidence that people generally cite to support this claim—that IQ is normally distributed—is completely irrelevant.

Do you have an example of people making the mistake of treating the distribution as an empirical fact (rather than a matter of definition)? I can easily imagine people asserting that intelligence is normally distributed for very good reason. Even if it's true by definition, it's useful to know the definitions.

People who are learning about IQ need a primer, so I would not only expect, but hope that most popular accounts will (because they need to bring their readers up to speed) assert the basics - including the point that IQ is normally distributed. Now, they may not say the words "by definition", but in a popular account you don't want to cause premature boredom in 99% of your readers in order to satisfy the 1% that is ready to split hairs about stuff that doesn't matter, and whether something is an empirical fact, or a consequence of definition, usually doesn't matter inside the scope of a given article.

Analogy: someone who is trying to estimate how much paint he'll need to paint a room doesn't need to know whether it is a matter of definition (or mathematical deduction from definitions), or empirical fact, that area of a rectangle is equal to width times height. The calculation works for him either way.

Also, that 1+1=2. If you drop one pebble into a bowl and then drop in another pebble, you can see a demonstration that 1+1=2. But is it an empirical fact that 1+1=2, or have we merely placed the label "2" on the result we typically get when we add 1 pebble to 1 pebble, making it true by definition? For most purposes, it doesn't matter! The calculation works either way.

Why we don't just use percentiles instead of IQ scores, I really don't know.

It becomes cumbersome to talk about intelligence above 99%, or below 1%, if you're using percentiles. While that's a small percentage of the population, these two extremes may be disproportionately important, the upper end because of their contributions, the lower end because of their needs.

Also, following on the point that Arthur raised about other phenomena, intelligence may, in fact, be normally distributed. If so, then deriving a parameter so as to produce a normal distribution may be a good first step toward revealing the underlying real differences. Of course, this does not tell you what standard deviation to assign (as a proportion of distance of median intelligence from 0).
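A quick numeric illustration of that tail problem (my sketch, using the usual mean-100, SD-15 convention):

```python
# Percentiles compress the tails: big IQ gaps vanish after rounding.
from statistics import NormalDist

norm = NormalDist(mu=100, sigma=15)
for iq in (115, 130, 145, 160):
    print(iq, f"{norm.cdf(iq):.3%}")
# 115 -> 84.134%, 130 -> 97.725%, but 145 -> 99.865% and 160 -> 99.997%:
# the last two are both just "the 99th percentile and change".
```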

It's also interesting that

It's also interesting that many otherwise rational transhumanists are quick to jump to the conclusion that an artificial intelligence would become orders of magnitude more intelligent than a human being. With the right transformation on an underlying variable, any difference can be made to be "orders of magnitude". I am not aware of any good quantitative measure of intelligence.
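A toy illustration of the point, with made-up numbers: a monotone re-scaling turns a 30-point gap into a 20x ratio without changing who outscores whom.

```python
# Any monotone transform preserves the ranking but can inflate ratios.
import math

bob, chris = 100, 130                    # two standard deviations apart
print(chris / bob)                       # 1.3x on the IQ scale

rescaled = lambda s: math.exp(s / 10)    # an equally arbitrary scale
print(rescaled(chris) / rescaled(bob))   # e^3 ≈ 20x on this scale
```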

Non-canonical

I really can't think of a meaningful interpretation of the sentence "Chris is twice as smart as Al", or even "Chris is as much smarter than Bob as Bob is than Al."

I take you to mean that you can't think of a canonical interpretation, a "one true" interpretation. But there are ways to do it, if you don't insist that the result be the "one and only" measure of intelligence. And I don't see why it is really so important.

Most basically, sheer speed in solving the same set of problems will immediately allow you to talk about "twice" and "as much as". If the ratios depend on the problems - fine, keep that in mind. After all, if Chris is twice as fast as Al on one task, but only just as fast as Al on another task, both these facts may be important to you.
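A tiny sketch of what such a speed-based ratio scale looks like (illustrative numbers only):

```python
# Solve times in seconds; speed gives a genuine ratio scale,
# but the ratio can differ from task to task.
al    = {"puzzle": 120.0, "arithmetic": 60.0}
chris = {"puzzle": 60.0,  "arithmetic": 60.0}

for task in al:
    print(task, f"Chris is {al[task] / chris[task]:.1f}x as fast as Al")
# puzzle: 2.0x, arithmetic: 1.0x -- both facts may matter, as noted above.
```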

Of course, another aspect of intelligence is ability/inability to solve a problem regardless of timing. If Al can solve a problem but Chris can't solve the same problem no matter how much time you give him to do it, that's not a difference in speed but a different sort of difference. But this can also be measured in ways that assign cardinal values, by measuring the problems themselves, and assigning, say, a "complexity" measure to each problem. This need not be arbitrary. For example, one measure of complexity is how many things must be tracked (kept in the head) simultaneously in order for the solution to be worked out. (And of course, memory can be directly measured - a common measure is to rattle off a string of numbers and then ask the subject to repeat the string.) The length of the longest string that can be remembered has a cardinal value.

Another measure of intelligence is long term retention/recall, which can be assigned a cardinal value in obvious ways.

Sure, you may get a set of different ratios - e.g. Chris is twice as smart as Al in this one way, but only as smart as Al in this other way. But so what? If the mental abilities really are so varied in their cardinal ratios, that's important to know.

whatever IQ measures . . . .

The point "The Bell Curve" was trying to make is that whatever IQ measures, it seems to be predictive of social/economic success in our environment.

Second, statistics applies to groups, never to individuals. Most persons with higher IQs will be more successful in the world and less likely to end up in the slammer.

intelligence and iq are not interchangeable

I think you're using the terms IQ and intelligence interchangeably, as though they are the same thing. One definition of intelligence, from Webster: "the ability to learn or understand or to deal with new or trying situations". Everyone's life experiences are different. If you were to take someone with a low IQ score and another with a high IQ score, and invert their life experiences ... does anyone even remotely suggest that, on average, both would retain their same IQ scores? On the contrary, averaged out, the IQs would totally reverse.

Therefore, the question becomes ... does intelligence just mean life experience? I do not think it does. Each person (the one with the low IQ, the one with the high IQ) learned according to the environment he was exposed to through his life. The fact that the IQs would reverse if their life experiences reversed means that both were of equal intelligence, because each would receive the exact same increase in IQ when subjected to the same learning experiences.

Now as it happens, those with better economic means are more likely to spend time pursuing academia. Those with lesser means pursue conventional hands-on type of jobs (or perhaps specialized job training, like a carpenter/electrician). Who writes so-called "intelligence" tests? Academics. This woman believes -

that intelligence tests are nothing more than tests that predict success or failure in the school system from which the questions have been derived. Nothing would be said about the relationship between scores on these tests and intelligence.

What would happen if you gave an academic a specialized test covering all aspects of house design - plumbing, electrical, construction, tiling, etc. - and pitted his score against a master carpenter with certifications in each of those areas of specialty? That is what would happen if intelligence tests were written by master carpenters instead of academics. My argument is that neither the academic test nor the carpentry test is any more indicative of intelligence, or lack thereof.

@bilwald:"The point "The Bell Curve" was trying to make is that whatever IQ measures, it seems to be predictive of social/economic success in our environment."

The pursuit of academia, to the extent it resembles learning factoids about history, geology, and math, certainly makes it easier to pass an IQ test with high scores. To pursue academia takes a lot of cash, however. Those who started with lots of cash, from affluent families, are more likely to be prosperous - though it could be argued that has nothing to do with intelligence (it is more likely due to these high-IQ'ers witnessing and experiencing first-hand how to succeed in life from their affluent families, as well as getting a starting boost; as the saying goes, "it takes money to make money").

Also, people with lots of cash have less need to steal and commit crime, which brings me to the next point -

@bilwald:"Most persons with higher IQs will be more successful in the world and less likely to end up in the slammer."

People who score high on IQ tests are also more likely to commit white-collar crimes. On June 29, 2009, Bernard Madoff will be sentenced for having stolen 65 billion in a Ponzi scheme (albeit part of this amount was due to fabricated gains). 65 billion dollars - that is the equivalent of $212 from every man, woman, and child in a US population of 306 million. If we had to total up the negative effects on the economy and people in general, white-collar crime would be far more devastating financially. And his Ponzi scheme went on for decades! It also wouldn't be hard to argue that white-collar crime is less detectable, so if everyone who should be thrown into the slammer were thrown in, white collars would give blue collars a run for their money (and the Clintons would be first in line).

My whole reason for making this response ... it seems to me the gist of this topic is that there is a group of people who are "smart", and this group of smart people tends to commit less crime, flourish socially, and have better jobs.

When in reality nobody's job is safe in the current world economic crisis, I would rather be a skilled carpenter than an executive in a large company that is downsizing right now. People are people, with sinful natures, and those with high IQs may just be a little better at covering their tracks, but when they are caught the extent of their crimes can be mind-boggling. Perhaps they don't commit crime because they don't need to; they already have money, because they were affluent enough to pursue college in the first place and learn how to pass IQ tests (though that is attributable less to intelligence than to being from an affluent family). It is a given that someone from a family affluent enough to send him to college in pursuit of academia wouldn't have as much inclination to steal (but what does that have to do with intelligence?).

There is a notable 15-point gap between the average IQs of blacks and whites. Does that mean that whites are smarter than blacks? No. It means whites in general are affluent enough to provide an environment where white children excel at academia. That has nothing to do with "the ability to learn or understand or to deal with new or trying situations".

You've got a single parent; you're the oldest sibling of five. There's barely enough food on the table and the bills aren't getting paid. You can go to work at a grocery store to pay the bills and feed your younger brothers and sisters, or you can pursue academia and the frivolous pursuit of factoids ... working a job that barely pays for college, on top of loans, for the next four years, before you get to compete for jobs with the half of corporate America that just got the boot in being downsized. This is the "trying situation": which is the more intelligent thing to do?

Ending this with one of my more favorite quotes ...

Question to Stephen Hawking: What is your I.Q.?

Stephen Hawking: I have no idea. People who boast about their I.Q. are losers.

A professor of women's studies needs an IQ of at least 95

To become a full professor in some fields of academia (women's studies, queer studies, ethnic studies etc) it is more important to be ideologically untainted, and an expert in the circle jerk of dogma, than to be intelligent. Such job opportunities for the unintelligent have also opened up in other academic departments such as the modern languages, many departments of the social sciences, and politically weighted sciences such as climate science.

Jack, a truly intelligent and insightful carpenter's version of an IQ test would have a lot in common with current IQ tests. A skills proficiency test is not an intelligence test. But it is possible to create highly g-loaded questions in the language of carpentry. There would be no guarantee of high scores for even the best of carpenters, however.

IQ tests are indeed predictive of academic and life success. But not perfectly predictive. Tests of executive function are better predictors of life success than IQ tests.

A high IQ can be an advantage or a disadvantage, just as above average height or good looks can be advantages or disadvantages. If the person cannot deal with his "advantages" they become disadvantages. Most people of high IQ are able to integrate those skills into their overall approach to life, but some cannot.

Men vs women? There are a lot more male super-geniuses than female, which may explain the Nobel Prize in the sciences discrepancy and the Fields Medal in math discrepancy. A lot more male dummies than female also. It may balance out.