Liveblogging: Kahneman talk

One of the great things about working at Google is the continuous flow of cool speakers, just like at a university. Today, we are graced by Nobel Laureate Daniel Kahneman, speaking on The Marvels and the Flaws of Intuition. Naturally, what with Google being one of the largest technology companies in the world, we began with technical problems. (The difficulties of making mics, A/V, and remote teleconferencing go smoothly are clearly severe, given the importance of the problem and the quality of the people who fail at it.)

Amusingly, someone at a remote site just held up a sign which said "We can't hear you - Seattle". OK, after 15 minutes, it starts, with Prof. K commenting "This is what happens when someone from the ivory tower comes to the temple of technology". He is talking about intuitive thinking - his original line of work was criticizing intuition, but now there are people defending and exploring the advantages of intuition. There are two ways that thoughts come to mind. One is computing 17 × 24 = 408, a sequential calculation governed by rules. For the second, he shows a picture of an angry woman. The knowledge that she is angry comes quickly, and we don't necessarily know why we think it.

There are conditions for developing good intuition. One is lots of practice, with rapid, unequivocal, correct feedback, so that people can learn the patterns. Chess players get this, but radiologists do not. Reliable signals are necessary for emotional learning.

On the other hand, intuition produces lots of error and overconfidence. People weigh information inefficiently (interviews) and neglect base-rate information (diagnosis). If you set up a competition between a human and an algorithm (clinical vs. statistical prediction), the algorithm almost always wins, even if the human has more information. One main reason it wins is that the algorithm is consistent: same output for same input. Subjective judgement is useful, but it is best used as an input into an algorithm. Judgements contain information, but people are bad at integrating all of that information to reach a final conclusion; algorithms do better.
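A minimal sketch of the consistency point in Python, with made-up features and weights (nothing here is from the talk): a fixed linear rule gives the same case the same score every time, while a judge whose internal weights drift does not.

```python
import random

# Hypothetical applicant, with three features each scored 0-10:
# (test score, interview rating, years of experience)
applicant = (7.0, 5.0, 3.0)

# The statistical rule: fixed weights, so same input -> same output, always.
WEIGHTS = (0.5, 0.2, 0.3)

def model_score(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

# The "clinical" judge: same information, but the effective weights drift
# from judgement to judgement (mood, fatigue, recency effects).
def human_score(features, rng):
    drifting = [w + rng.uniform(-0.15, 0.15) for w in WEIGHTS]
    return sum(w * x for w, x in zip(drifting, features))

rng = random.Random(0)
print([round(model_score(applicant), 2) for _ in range(3)])       # identical every time
print([round(human_score(applicant, rng), 2) for _ in range(3)])  # varies case to case
```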

Despite this poor performance, experts think that they do a good job, and somehow they make errors again and again and never learn that they are wrong. This appears to be due to various human mechanisms: 20/20 hindsight, retroactively exaggerating the probability they had assigned to what actually happened, and convincing themselves that their own prediction almost happened. Note how the earlier criteria (immediate, unequivocal feedback) short-circuit these mechanisms so that people can actually learn.

He talks about the two types of thinking: intuition (System 1) vs. reasoning (System 2). Fast/slow, parallel/serial, automatic/controlled, associative/rule-governed, slow-learning/fast-learning, emotional/neutral. Examples of intuitive assessments are properties of objects - the kind of System 1 assessment you can make while also executing a left turn in traffic. An example of a bad intuitive assessment: offer two insurance policies for the same price at an airport:

(A) Pays $50K in case of death due to terrorism
(B) Pays $50K in case of death due to any reason

If you sell them next to each other, (B) sells more, because reasoning tells the person it is better. But if you sell them separately, (A) sells more - because death by terrorism is more real and scary than abstract death. This is a very important distinction for experiments: people may react very differently to options presented together than apart, because joint presentation invites explicit comparison, which is a reasoning judgement and carries fewer errors and biases.

One of the key operations of System 2 (Reasoning) is monitoring System 1.

Example: he gives the "bat-and-ball" problem: "A bat and a ball together cost $1.10; the bat costs $1 more than the ball. How much does the ball cost?" There is an obvious wrong answer: ten cents. Even for the people who get the right answer, the wrong answer jumps into their mind first. But the people who *say* the wrong answer are telling us something important - they don't check their work. Their System 2 has failed to monitor System 1. This is fascinating, because this question is part of a 3-question test which correlates significantly with other measures of IQ. This suggests that intelligence is related not only to the ability to do explicit reasoning, but also to the ability to restrain and monitor one's intuition. This suggests a hypothesis: IQ is correlated with self-control.
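A quick check of the arithmetic, in Python, showing why the intuitive answer fails the very constraint the problem states (working in cents keeps the numbers exact):

```python
# Bat-and-ball, in cents: ball + bat = 110, and bat = ball + 100.
TOTAL, DIFF = 110, 100

# The intuitive System 1 answer: a ten-cent ball.
ball = 10
print(ball + (ball + DIFF))  # 120 -- together they'd cost $1.20, not $1.10

# Solving ball + (ball + DIFF) = TOTAL gives ball = (TOTAL - DIFF) / 2.
ball = (TOTAL - DIFF) // 2
print(ball, ball + DIFF)  # 5 105 -- a $0.05 ball and a $1.05 bat
```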

What is the difference between intuition and perception? Both meet many of the System 1 criteria. The difference is that perception is about the here and now, but intuition takes the same mechanism and operates on hypotheticals.

If you ask students (1) "How happy are you?" and then (2) "How many dates did you have last month?", the correlation between the answers is about zero (-0.12, not significant). If you ask the questions in the other order, the correlation is 0.66.

The average is intuitive; the total is not. If you subject people to pain, say in a medical procedure, and ask them during the procedure how much it hurts, and then ask them afterwards how bad it was, their after-the-fact answers will be based on the average, not the integral. If you give people two experiences, both with the same first 60 seconds of pain, and one with another 30 seconds of diminishing pain, they will say they'd rather repeat the longer experience. Similarly, if you ask how much 8 dinner plates, 8 salad bowls, and 8 dessert plates would sell for, vs. the same set plus 4 cups, 4 broken cups, 6 saucers, and 2 broken saucers, people will say the latter should sell for less. In both cases, the average is the statistic we intuit, and so that is what guides our decisions.
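A toy calculation makes the average-vs.-integral point concrete; the pain profiles below are hypothetical numbers chosen to mirror the 60-second-plus-30-second setup described above:

```python
# Two pain profiles, one sample per second on a 0-10 scale (made-up numbers).
short_trial = [8] * 60  # 60 seconds of strong pain, then it stops
long_trial = [8] * 60 + [p for p in (7, 6, 5, 4, 3, 2) for _ in range(5)]
# ...the same 60 seconds, plus 30 seconds of steadily diminishing pain

for name, trial in [("short", short_trial), ("long", long_trial)]:
    total = sum(trial)            # the integral: total pain actually endured
    average = total / len(trial)  # the statistic intuition remembers
    print(f"{name}: total={total}, average={average:.2f}")

# short: total=480, average=8.00
# long:  total=615, average=6.83
# The long trial contains strictly more total pain, yet its lower average
# makes it the experience people say they would rather repeat.
```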

Fascinating, Patri.

Radiologists suck.

But if you sell them separately, (A) sells more - because death by terrorism is more real and scary than abstract death.

Is this made up, the result of an experiment in which subjects pretend to buy insurance, or is terrorism-only life insurance actually being sold to very stupid people?

It looks like the first result here might be the original paper:

http://scholar.google.com/scholar?hl=en&lr=&q=framing+insurance+decisions&btnG=Search

but it's not free.

Why bother with terrorism insurance when the government gave money to the families of people killed on September 11th anyway?

An example of a bad intuitive assessment is that if you offer two insurance policies for the same price at an airport:

(A) Pays $50K in case of death due to terrorism
(B) Pays $50K in case of death due to any reason

If you sell them next to each other, (B) sells more, because reasoning tells the person it is better. But if you sell them separately, (A) sells more - because death by terrorism is more real and scary than abstract death.

I have a problem with a lot of these experiments and this illustrates the point. People extract more information from the environment than the experimenter may realize.

If I'm walking along and someone offers me protection from terrorism, one inference I may make from this is that terrorism is a problem. I may buy the protection.

If I'm walking along and someone offers me protection from whatever, I draw no inferences, because I get no particular information from that about possible threats.

Therefore it can be perfectly rational and logical for me, in those situations, to pay more for protection from terrorism than for protection from whatever.

If someone offers me, side by side, protection from terrorism and protection from whatever, then the information I glean is the same whichever I choose, and so I will prefer protection from whatever.

Patri,

Google has in the past put some of these talks on Google Video. If they're not doing that consistently, can you go bug somebody about that for me? Thanks.
