Royal Society Open Science (Oct 2022)

A retrospective assessment of COVID-19 model performance in the USA

  • Kyle J. Colonna
  • Gabriela F. Nane
  • Ernani F. Choma
  • Roger M. Cooke
  • John S. Evans

DOI
https://doi.org/10.1098/rsos.220021
Journal volume & issue
Vol. 9, no. 10

Abstract

Coronavirus disease 2019 (COVID-19) forecasts from over 100 models are readily available. However, little published information exists on the performance of their uncertainty estimates (i.e. probabilistic performance). To evaluate probabilistic performance, we employ the classical model (CM), an established method typically used to validate expert opinion. In this analysis, we assess both the predictive and probabilistic performance of COVID-19 forecasting models during 2021. We also compare the performance of aggregated forecasts (i.e. ensembles) based on equal and CM performance-based weights to an established ensemble from the Centers for Disease Control and Prevention (CDC). Our analysis of forecasts of COVID-19 mortality from 22 individual models and three ensembles across 49 states indicates that: (i) good predictive performance does not imply good probabilistic performance, and vice versa; (ii) models often provide tight but inaccurate uncertainty estimates; (iii) most models perform worse than a naive baseline model; (iv) both the CDC and CM performance-weighted ensembles perform well; and (v) while the CDC ensemble was more informative, the CM ensemble was more statistically accurate across states. This study presents a worthwhile method for appropriately assessing the performance of probabilistic forecasts and could improve both public health decision-making and COVID-19 modelling.
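To make the CM machinery concrete, the Python sketch below computes its two scores for one model's quantile forecasts: statistical accuracy (a calibration p-value asking whether realizations fall into the interquantile bins at the expected rates) and informativeness (how concentrated the quantiles are relative to a uniform background). The 5/50/95% quantile levels, the 10% intrinsic-range overshoot, and the toy data are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch of the two CM scores: statistical accuracy (calibration)
# and informativeness. Quantile levels, overshoot and data are assumptions.
import numpy as np
from scipy.stats import chi2

BIN_PROBS = np.array([0.05, 0.45, 0.45, 0.05])  # bin masses implied by 5/50/95% quantiles

def calibration_score(quantile_sets, realizations):
    """Statistical accuracy: a p-value testing whether roughly 5/45/45/5%
    of realizations fall into the four interquantile bins."""
    counts = np.zeros(len(BIN_PROBS))
    for q, x in zip(quantile_sets, realizations):
        counts[np.searchsorted(q, x)] += 1      # which bin the realization hit
    s = counts / counts.sum()                   # empirical bin frequencies
    kl = sum(si * np.log(si / pi)               # relative entropy vs theory
             for si, pi in zip(s, BIN_PROBS) if si > 0)
    return chi2.sf(2 * len(realizations) * kl, df=len(BIN_PROBS) - 1)

def information_score(q, lo, hi):
    """Informativeness of one forecast relative to a uniform background
    measure on the intrinsic range [lo, hi]; higher = tighter quantiles."""
    edges = np.concatenate(([lo], np.asarray(q, float), [hi]))
    widths = np.diff(edges) / (hi - lo)
    return float(np.sum(BIN_PROBS * np.log(BIN_PROBS / widths)))

# Toy data: one model's 5/50/95% quantiles for ten forecast targets,
# with the forecast centre randomly offset from the truth.
rng = np.random.default_rng(0)
truth = rng.normal(100.0, 15.0, size=10)
shifts = rng.uniform(-15.0, 15.0, size=10)
quantile_sets = [np.array([t + s - 25, t + s, t + s + 25])
                 for t, s in zip(truth, shifts)]

# Intrinsic range: all quantiles and realizations, extended by a 10% overshoot.
all_vals = np.concatenate(quantile_sets + [truth])
span = all_vals.max() - all_vals.min()
lo, hi = all_vals.min() - 0.1 * span, all_vals.max() + 0.1 * span

cal = calibration_score(quantile_sets, truth)
info = np.mean([information_score(q, lo, hi) for q in quantile_sets])
print(f"calibration = {cal:.3f}, mean information = {info:.3f}, "
      f"unnormalized weight = {cal * info:.3f}")
```

In Cooke's scheme, each model's weight is proportional to its calibration score times its mean information score (set to zero below a chosen calibration cutoff), and the performance-weighted ensemble mixes the models' forecast distributions with these weights; this is the kind of CM performance-based weighting the ensembles in the paper are built on.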

Keywords