Schneier on terror scares

Schneier has a worthwhile entry on the false alarms that are plaguing the home front in the war on terror. He presents a straightforward theory of decision-making in a bureaucracy which I think has application outside this particular issue:

Watch how it happens. Someone sees something, so he says something. The person he says it to -- a policeman, a security guard, a flight attendant -- now faces a choice: ignore or escalate. Even though he may believe that it's a false alarm, it's not in his best interests to dismiss the threat. If he's wrong, it'll cost him his career. But if he escalates, he'll be praised for "doing his job" and the cost will be borne by others. So he escalates. And the person he escalates to also escalates, in a series of CYA decisions. And before we're done, innocent people have been arrested, airports have been evacuated, and hundreds of police hours have been wasted.

This story has been repeated endlessly, both in the U.S. and in other countries. Someone -- these are all real -- notices a funny smell, or some white powder, or two people passing an envelope, or a dark-skinned man leaving boxes at the curb, or a cell phone in an airplane seat; the police cordon off the area, make arrests, and/or evacuate airplanes; and in the end the cause of the alarm is revealed as a pot of Thai chili sauce, or flour, or a utility bill, or an English professor recycling, or a cell phone in an airplane seat.

Of course, by then it's too late for the authorities to admit that they made a mistake and overreacted, that a sane voice of reason at some level should have prevailed. What follows is the parade of police and elected officials praising each other for doing a great job, and prosecuting the poor victim -- the person who was different in the first place -- for having the temerity to try to trick them.

Some points are worth making.

There is a disconnect between intention and reality. The intention is that an optimal balance will be struck between false positives (terror scares that turn out to be nothing) and false negatives (reports of suspicious activity that are ignored but turn out to be genuine terrorism). The reality is that the political environment creates incentives which hardly encourage the agency in charge to strike such a balance.

Case number 1: if a report of suspicious activity is ignored but turns out to be real terrorism, then heads will roll. It does not matter that ignoring the report was reasonable because it was like so many other reports. The public will not be thoughtful about this, and even if they are, any after-the-fact consideration of the matter will be plagued by hindsight bias. They will blame the agency for not following up on the report. And within the agency there will be a particular individual who chose not to escalate it, so blame will have a strong tendency to find that person.

Case number 2: if a report of suspicious activity is followed up and eventually leads to a pointless scare and to embarrassment for the agency as a whole, individuals within the agency who were involved in escalating the false positive will tend to be protected from repercussions, for the simple reason that they will not be alone. Everyone in the chain of escalation will be equally guilty of participating in it. No one will want to blame one particular guy for escalating the false report, because if they do, then logically they'll need to blame everyone else who okayed the escalation, and some of those people will be high up within the organization. There will be a strong incentive, then, to place the blame entirely outside the organization - most probably on the individual whose suspicious activity inspired the false report in the first place.

These two cases suggest that there is a powerful incentive to escalate every report, and scant incentive to carefully consider the report and to optimize between false positives and false negatives.

Additionally, it is simply much easier, much less work, to escalate every report without considering it carefully, than to take the time and make the effort to consider it carefully. This only compounds the incentive to escalate reports unthinkingly. This is sheer laziness. But unless there is an incentive not to be lazy, people will be lazy. And Case 1 and Case 2 taken together suggest that thoughtful handling may actually be punished, which does the opposite of providing an incentive not to be lazy.
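The asymmetry in Cases 1 and 2 can be made concrete with a toy expected-cost calculation. Every number below is an invented illustration, not an estimate; the point is only that when dismissal risks a ruined career and escalation diffuses blame across the chain, escalating dominates even when the report is almost certainly nothing.

```python
# Toy model of the dismiss-vs-escalate choice facing one official.
# All costs and probabilities are hypothetical illustrations.

p_real = 0.001               # chance a given report is genuine terrorism

cost_dismiss_real = 1000.0   # ruined career if a dismissed report was real
cost_dismiss_false = 0.0     # nothing happens if a dismissed report was noise
cost_escalate_real = -10.0   # praise for "doing his job" if escalation was right
cost_escalate_false = 0.1    # blame for a scare is diffused, so nearly free

def expected_cost(cost_if_real, cost_if_false, p=p_real):
    """Expected personal cost of a choice, averaged over both outcomes."""
    return p * cost_if_real + (1 - p) * cost_if_false

dismiss = expected_cost(cost_dismiss_real, cost_dismiss_false)
escalate = expected_cost(cost_escalate_real, cost_escalate_false)

print(f"dismiss:  {dismiss:.3f}")    # 1.000
print(f"escalate: {escalate:.3f}")   # 0.090
# Escalating is personally cheaper, so the individually rational
# official escalates every report regardless of its merits.
```

Under these assumptions the ranking doesn't flip until the false report is almost certain to be noise *and* escalation itself carries real personal cost, which is exactly the condition Case 2 says never holds.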

I think Schneier is overoptimistic about the possibility of fixing the situation. As I have tried to show, the bad incentives have their origin in the public reaction, combined with the logic of the situation. Regardless of how the responsible agency operates, the public will still be plagued by hindsight bias and will still look for scapegoats if there is a terror attack and reports were ignored, however reasonably. And regardless of how the responsible agency operates, a false positive, i.e. a terror scare that turns out to be nothing, will inevitably have many fingerprints on it and so there will be every incentive not to point fingers at any individual within the agency. Schneier writes:

But these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. [...]

Equally important, politicians need to stop praising and promoting the officers who get it wrong. And everyone needs to stop castigating, and prosecuting, the victims just because they embarrassed the police by their innocence.

He is asking people to act against the incentives that they're faced with. Exhortation to act against one's incentives is not a real solution. A real solution will take into account what the incentives are. (No, I don't have a real solution.)

A possible solution - but

A possible solution - but one that would hardly be popular - would be to draw up a set of objective criteria that replace individual judgment in deciding whether to escalate. If the criteria are not met, the issue is not escalated: a cell phone on an airplane seat wouldn't qualify. A policeman who fails to escalate an issue could protect himself by saying he was following the guidelines, while a policeman who escalates an issue that turns out to be nothing, outside the guidelines, would be punished. This reverses his incentives.
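The rule-based escalation the comment proposes could be sketched as a simple checklist lookup. The criteria names here are invented placeholders for illustration, not a proposed real guideline:

```python
# Toy sketch of checklist-based escalation: the officer matches a report's
# observed features against fixed criteria instead of making a judgment call.
# The criteria below are invented placeholders, not a real guideline.

ESCALATION_CRITERIA = {
    "credible_specific_threat",   # e.g. an explicit stated threat
    "dangerous_item_observed",    # e.g. a weapon actually seen
}

def should_escalate(report_flags):
    """Escalate only if the report matches at least one listed criterion."""
    return bool(ESCALATION_CRITERIA & set(report_flags))

# A lone cell phone on an airplane seat matches no criterion.
print(should_escalate({"unattended_cell_phone"}))    # False
print(should_escalate({"credible_specific_threat"})) # True
```

The design point is that the decision becomes auditable: the officer's defense is no longer "I judged it harmless" but "it matched no criterion," which is exactly what reverses the incentive.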


I think that it would be hard to come up with a guideline that (a) truly captured a large enough fraction of innocent possibilities to have a significant effect, and did so with no ambiguity and no room for interpretation (which would be a clear opening for criticism), and at the same time (b) really could be memorized by everyone, or even anyone, involved.

And there are other considerations. For this to work, the public would have to be sufficiently thoughtful to say, "oh, the guideline said not to escalate, well, that's okay then," and I don't see them being that way. This is the same public that year in and year out votes in a government that continuously violates the basic principles of economics, nay, of reason.

And even if both hurdles could be passed, the airport experience suggests that government is incapable of issuing sensible guidelines. The list of confiscated items suggests an institution that is not open to carefully weighing the pros and cons in drawing up a list. So, as you say, it would not be popular.

I was actually about to add a point to the entry but I'll add it here. I don't think that Schneier has actually empirically demonstrated that an optimal balance has not been struck between false positives and false negatives. True, he lists some false positives, but a priori we don't know whether that number of false positives is appropriate. In addition, it's hard for us to escape hindsight bias when assessing the false positives. What seems obviously innocent now, was not necessarily sufficiently obviously innocent before the outcome was known.
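The base-rate point can be illustrated with back-of-the-envelope arithmetic. With invented but not implausible numbers, even a well-calibrated screening process produces a long list of embarrassing false alarms, simply because genuine plots are vanishingly rare among citizen tips:

```python
# Back-of-the-envelope illustration (all numbers invented): when real plots
# are rare among tips, even a good screener yields mostly false alarms.

reports_per_year = 100_000   # citizen tips received
p_genuine = 1e-5             # fraction of tips tied to a real plot
sensitivity = 0.99           # chance a real plot gets escalated
false_positive_rate = 0.01   # chance an innocent tip gets escalated anyway

genuine = reports_per_year * p_genuine
innocent = reports_per_year - genuine

true_alarms = genuine * sensitivity
false_alarms = innocent * false_positive_rate

print(f"true alarms:  {true_alarms:.2f}")   # about 1 per year
print(f"false alarms: {false_alarms:.0f}")  # about 1,000 per year
# A list of scares that turned out to be nothing is therefore consistent
# with a screener that is already quite discriminating.
```

So Schneier's list of false positives, by itself, cannot tell us whether the balance between false positives and false negatives is badly struck; that depends on base rates we don't observe.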

I think that Schneier's argument carries weight primarily because of the theory he offers of how members of a bureaucracy can be predicted to act, given the incentives they would be likely to face.