Robust Misspecified Models and Paradigm Shifts
Individuals use models to guide decisions, but many models are wrong. This paper studies which misspecified models are likely to persist when individuals also entertain alternative models. Consider an agent who uses her model to learn the relationship between action choices and outcomes. The agent exhibits sticky model switching, captured by a threshold rule: she switches to an alternative model only when it fits the observed data sufficiently better. The main result characterizes whether a model persists based on two key features that are straightforward to derive from the primitives of the learning environment: the model’s asymptotic accuracy in predicting the equilibrium pattern of observed outcomes and the ‘tightness’ of the prior around this equilibrium. I show that misspecified models can be robust in that they persist against a wide range of competing models, including the correct model, even when individuals observe an infinite amount of data. Moreover, simple misspecified models with entrenched priors can be even more robust than correctly specified models. I use this characterization to provide a learning foundation for the persistence of systematic biases in two applications. First, in an effort-choice problem, I show that overconfidence in one’s ability is more robust than underconfidence. Second, a simplistic binary view of politics is more robust than the more complex correct view when individuals consume media without fully recognizing reporting bias.
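As a minimal sketch of the sticky switching rule, one natural formalization is a likelihood-ratio threshold; the notation here (models \(m, m'\), history \(h_t\), priors \(\pi_m\), threshold \(\alpha\)) is illustrative and need not match the paper’s formal setup:
\[
\text{switch from } m \text{ to } m' \text{ after history } h_t
\quad\Longleftrightarrow\quad
\frac{P_{m'}(h_t)}{P_{m}(h_t)} > \alpha,
\qquad \alpha > 1,
\]
where \(P_m(h_t) = \int \Pr(h_t \mid \theta)\, \mathrm{d}\pi_m(\theta)\) is the probability model \(m\) assigns to the observed history under its prior \(\pi_m\) over parameters. The threshold \(\alpha\) captures the stickiness of switching: the larger \(\alpha\), the more decisively an alternative must outperform the current model before the agent abandons it.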