Inside view and outside view

At Overcoming Bias, Robin Hanson writes:

Instead of watching fireworks on July 4, I did a 1500 piece jigsaw puzzle of fireworks, my first jigsaw in at least ten years. Several times I had the strong impression that I had carefully eliminated every possible place a piece could go, or every possible piece that could go in a place. I was very tempted to conclude that many pieces were missing, or that the box had extra pieces from another puzzle. This wasn't impossible - the puzzle was an open box a relative had done before. And the alternative seemed humiliating.

But I allowed a very different part of my mind, using different considerations, to overrule this judgment; so many extra or missing pieces seemed unlikely. And in the end there was only one missing and no extra pieces. I recall a similar experience when I was learning to program. I would carefully check my program and find no errors, and then when my program wouldn't run I was tempted to suspect compiler or hardware errors. Of course the problem was almost always my fault.

According to Robin Hanson, these illustrate the distinction between the inside and the outside view. To explain this distinction, he quotes "Kahneman and Lovallo's classic '93 paper":

Two distinct modes of forecasting were applied to the same problem in this incident. The inside view of the problem is the one that all participants adopted. An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, and by extrapolating current trends. The outside view is the one that the curriculum expert was encouraged to adopt. It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. The case at hand is also compared to other members of the class, in an attempt to assess its position in the distribution of outcomes for the class. ...

If we consider the programming example, at least part of what seems to be happening is one or both of two things:

1) We are slightly overconfident about the individual statements of the program. We should be only 99% sure about each step, but we round that up to 100%. The overestimate is slight, but it adds up. If your program has 100 lines of code and you estimate the probability that each statement is correct as 100%, you will estimate the probability that the whole is correct as 100%; had you more realistically estimated each statement at only 99%, you might have concluded that the program as a whole had a significantly lower than 100% probability of being correct.

2) We fail to do the math. Even if we correctly estimate the probability that each statement is correct, we fail to combine the probabilities when estimating the probability that the whole is correct. For example, if the correctness of each of two statements is independent of the correctness of the other, and each has a 99% probability of being correct, then the two statements have only about a 98% probability of both being correct. It takes just one bad statement to make a program incorrect, so in a program of 100 or 1000 statements, even if each statement is almost certainly correct considered by itself, the whole may well have a high probability of being incorrect.
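The arithmetic behind both points can be sketched in a few lines of Python (a toy calculation of my own, not anything from the original post):

```python
# Toy calculation: if each statement of a program is correct with
# probability p, independently of the others, then the whole program
# is correct with probability p ** n.
def prob_all_correct(p: float, n: int) -> float:
    return p ** n

# Two statements at 99% each: roughly a 98% chance both are correct.
print(prob_all_correct(0.99, 2))    # ~0.9801

# A 100-statement program at 99% per statement is correct only about
# a third of the time -- the slight per-step overconfidence adds up.
print(prob_all_correct(0.99, 100))  # ~0.366
```

Rounding each 99% up to 100% hides exactly this compounding, which is why the inside view feels so much more certain than it should.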

Anyway, this seems to be part of the reason that the "inside view" is error-prone. We are slightly overconfident about each step, and/or we fail to do the math when estimating the probability that all our steps together are right.

Something like this may also apply to the jigsaw puzzle. Some pieces are easier to place than others; the easier ones get placed first and the harder ones last. Taking this into account, it should not be surprising that the last remaining pieces are very hard to place. We may make at least two errors here. First, we may erroneously assume that all the pieces are equally easy to place, failing to consider that there is a range of difficulty. Second, we may underestimate the average difficulty by basing our estimate on the earlier, easier pieces that we have already placed.

Compounding this problem is that we may not really know what the difficulty distribution is. Estimating the difficulty of placing a valid puzzle piece near the end may be hard to do not only because the math is hard, but because we don't have enough empirical data from which to draw an estimate.
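The selection effect can be illustrated with a small simulation. This is a hypothetical sketch: the exponential spread of difficulties is an assumption chosen for illustration, not data from any actual puzzle.

```python
import random

random.seed(0)

# Assume each of 1500 pieces has some "difficulty" (say, expected
# minutes to place), drawn from an exponential distribution -- a
# purely illustrative assumption.
difficulties = [random.expovariate(1.0) for _ in range(1500)]

# We naturally place the easiest pieces first, so sort ascending:
# the pieces left at the end are the hardest ones.
placement_order = sorted(difficulties)

overall_avg = sum(placement_order) / len(placement_order)
first_100_avg = sum(placement_order[:100]) / 100
last_10_avg = sum(placement_order[-10:]) / 10

# Extrapolating from the first pieces badly underestimates the endgame:
# first_100_avg < overall_avg < last_10_avg.
```

Under these assumptions, an estimate of "how hard is a piece to place" formed from the early going is systematically too optimistic about the final few pieces, which is the jigsaw analogue of the inside-view error.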

One question which these considerations raise is, what exactly is the relevant distinction between an "outside view" and an "inside view"? Is the relevant distinction that the "outside view" considers the situation as a whole? Or is the relevant distinction that the "outside view" assigns probabilities and does the math? These are two distinctions, and they seem to be combined into one distinction. Consider the description of the outside view:

It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one.

There are two elements to this description. One element is that the details are ignored and the situation is considered as a whole. Another element is that the whole is considered as a member of a class and statistical reasoning is applied to the class. One could, however, as I pointed out, pay attention to the details but consider the details as members of classes and apply statistical reasoning to those - combining one aspect of the outside view with one aspect of the inside view - and still produce a correct estimate of uncertainty.

We might want to break the distinction apart into two distinctions (or even three). One distinction concerns what level of detail the matter is considered at. Another distinction concerns whether uncertainty is recognized and statistics are applied or swept under the rug. The second distinction breaks apart into recognizing uncertainty, and applying statistical reasoning.

Why, by the way, should we ever sweep statistics under the rug? I think the obvious answer is: it takes time and effort to estimate uncertainty and apply statistics to something, and in some cases it's just not worth the effort. What may be happening here is that when we consider matters in detail we err on the side of sweeping uncertainty and statistical reasoning under the rug.

I think the same distinction very much applies to financial trading, especially casual or discretionary trading. At some point it happens that you step back from your "view" and say: hey, markets are efficient.

bayes to the rescue...

what exactly is the relevant distinction between an "outside view" and an "inside view"?

Similar to your suggestions, I think both can be thought of as using probabilities. The inside view uses a prior distribution based on a particular actor's success in similar situations. The outside view uses a prior distribution based on all actors' success on the specific situation.

You hit the nail on the head with "combining one aspect of the outside view with one aspect of the inside view - and still produce a correct estimate of uncertainty." That is precisely what a good rational Bayesian reasoner should be doing: combining the two to get a new distribution. My instinct tells me that, instead, we use a base-case approach, which would lead one to believe that the 99%/99% situation described above yields a belief of 99%. We aren't very good at combining probabilities, and even worse at handling conditional probabilities.

This is why theoretical study and experience improve performance in any field. Natural talent is important, but not sufficient.