HKS Authors

See citation below for complete author information.

Abstract

Public opinion researchers, campaigns, and political scientists often rely on self-predicted vote to measure political engagement, allocate resources, and forecast turnout. Despite its importance, little research has examined the accuracy of self-predicted vote responses. Seven pre-election surveys with post-election vote validation from three elections (N = 29,403) reveal several patterns. First, many self-predicted voters do not actually vote (flake-out). Second, many self-predicted nonvoters do actually vote (flake-in); this is the first robust measurement of flake-in. Third, actual voting is more accurately predicted by past voting (whether from the voter file or recalled) than by self-predicted voting. Finally, self-predicted voters differ from actual voters demographically. Actual voters are more likely to be white (and not black), older, and partisan than actual nonvoters (i.e., participatory bias), but self-predicted voters and self-predicted nonvoters do not differ much. Vote self-prediction is “unbiased” in that it misleadingly suggests that there is no participatory bias.

Citation

Rogers, Todd, and Masa Aida. "Vote Self-Prediction Hardly Predicts Who Will Vote, and Is (Misleadingly) Unbiased." HKS Faculty Research Working Paper Series RWP13-010, April 2013.