PLoS ONE (Jan 2021)

Explainable models for forecasting the emergence of political instability.

  • Emma Baillie,
  • Piers D L Howe,
  • Andrew Perfors,
  • Tim Miller,
  • Yoshihisa Kashima,
  • Andreas Beger

DOI
https://doi.org/10.1371/journal.pone.0254350
Journal volume & issue
Vol. 16, no. 7
p. e0254350

Abstract


Building on previous research on the use of macroeconomic factors for conflict prediction, and using data on political instability provided by the Political Instability Task Force, this article proposes two minimal forecasting models of political instability, optimised to have the greatest possible predictive power for one-year and two-year event horizons while still making predictions that are fully explainable. Both models employ logistic regression and use just three predictors: polity code (a measure of government type), infant mortality, and years of stability (i.e., years since the last instability event). The models make predictions for 176 countries on a country-year basis and achieve AUPRCs of 0.108 and 0.115 for the one-year and two-year models respectively. They use public data with ongoing availability and so are readily reproducible. They use Monte Carlo simulations to construct confidence intervals for their predictions and are validated by testing their predictions on a set of reference years separate from the set used to train them. This validation shows that the models are not overfitted, but suggests that some previous models in the literature may have been. The models developed in this article can explain their predictions by showing, for a given prediction, which predictors were the most influential, and by using counterfactuals to show how the prediction would have changed had those predictors taken different values. The models are compared to models created by lasso regression and shown to have at least as much predictive power while producing predictions that can be more readily explained. Because policy makers are more likely to be influenced by models whose predictions can be explained, the more interpretable a model is, the more likely it is to influence policy.
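The modelling approach the abstract describes can be illustrated with a minimal sketch: a logistic regression fit on the three named predictors and scored by the area under the precision-recall curve (AUPRC). This is not the authors' code; the data here are synthetic and all variable names are hypothetical.

```python
# Sketch of a three-predictor logistic-regression forecaster, assuming
# synthetic country-year data (NOT the paper's dataset or results).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n = 2000  # hypothetical number of country-year observations

# The three predictors named in the abstract (ranges are illustrative):
X = np.column_stack([
    rng.integers(-10, 11, n),   # polity code (-10 autocracy .. +10 democracy)
    rng.uniform(2.0, 100.0, n), # infant mortality (per 1,000 live births)
    rng.integers(0, 50, n),     # years of stability since the last event
])
y = rng.binomial(1, 0.05, n)    # rare instability onsets (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]

# AUPRC (average precision) is the metric reported in the article.
auprc = average_precision_score(y, probs)
print(round(auprc, 3))
```

With only three coefficients, a counterfactual explanation of the kind the abstract mentions reduces to re-scoring an observation after changing one predictor's value and comparing the two predicted probabilities.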