HKS Authors

See citation below for complete author information.

Jennifer S. Lerner
Thornton Bradshaw Professor of Public Policy, Decision Science, and Management

Abstract

National security is one of many fields where experts make vague probability assessments when evaluating high-stakes decisions. This practice has always been controversial, and it is often justified on the grounds that making probability assessments too precise could bias analysts or decision makers. Yet these claims have rarely been submitted to rigorous testing. In this paper, we specify behavioral concerns about probabilistic precision into falsifiable hypotheses that we evaluate through survey experiments involving national security professionals. Contrary to conventional wisdom, we find that decision makers responding to quantitative probability assessments are less willing to support risky actions and more receptive to gathering additional information. Yet we also find that when respondents estimate probabilities themselves, quantification magnifies overconfidence, particularly among low-performing assessors. These results hone wide-ranging concerns about probabilistic precision into a specific and previously undocumented bias that training may be able to correct.

Citation

Friedman, Jeffrey A., Jennifer S. Lerner, and Richard Zeckhauser. "Behavioral Consequences of Probabilistic Precision: Experimental Evidence from National Security Professionals." International Organization 71.4 (Fall 2017): 803-826.