Judgment and Decision Making (Nov 2012)

Using inferred probabilities to measure the accuracy of imprecise forecasts

  • Paul Lehner,
  • Avra Michelson,
  • Leonard Adelman,
  • Anna Goodman

DOI
https://doi.org/10.1017/S1930297500003272
Journal volume & issue
Vol. 7
pp. 728–740

Abstract


Research on forecasting is effectively limited to forecasts that are expressed with clarity, which is to say that the forecasted event must be sufficiently well defined that it can be clearly resolved whether or not the event occurred, and forecast certainties must be expressed as quantitative probabilities. When forecasts are expressed with clarity, quantitative measures (scoring rules, calibration, discrimination, etc.) can be used to assess forecast accuracy, which in turn can be used to compare the accuracy of different forecasting methods. Unfortunately, most real-world forecasts are not expressed clearly. This lack of clarity extends both to the description of the forecast event and to the use of vague language to express forecast certainty. It is thus difficult to assess the accuracy of most real-world forecasts, and consequently the accuracy of the methods used to generate them. This paper addresses this deficiency by presenting an approach, the inferred probability method, for measuring the accuracy of imprecise real-world forecasts with the same quantitative metrics routinely used to measure the accuracy of well-defined forecasts. To demonstrate applicability, the inferred probability method is applied to measure the accuracy of forecasts in fourteen documents examining complex political domains.
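As a point of reference for the standard metrics the abstract names (scoring rules and calibration), the following Python sketch shows how they are typically computed for clearly expressed forecasts. It is an illustration only, with hypothetical forecast and outcome data; it is not the paper's inferred probability procedure.

    # Illustrative sketch (not from the paper): a Brier score and a simple
    # calibration table for forecasts stated as quantitative probabilities
    # of events that cleanly resolved as occurred (1) or not (0).
    # The forecasts and outcomes below are hypothetical.
    from collections import defaultdict

    forecasts = [0.9, 0.7, 0.3, 0.8, 0.2, 0.6]   # stated probabilities
    outcomes  = [1,   1,   0,   0,   0,   1]     # 1 = event occurred

    # Brier score: mean squared difference between stated probability and
    # outcome (lower is better; 0 is perfect).
    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")

    # Calibration: group forecasts into probability bins and compare each
    # bin's stated probability with the observed frequency of the event.
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bins[round(p, 1)].append(o)              # bin by probability, 0.1 wide

    for p in sorted(bins):
        hits = bins[p]
        print(f"stated p={p:.1f}: observed frequency "
              f"{sum(hits) / len(hits):.2f} (n={len(hits)})")

The paper's contribution is a way to obtain such quantitative inputs (inferred probabilities) from forecasts that were originally stated imprecisely, so that metrics like these can then be applied.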

Keywords