Diagnostic and Prognostic Research (Jun 2023)

Decision curve analysis: confidence intervals and hypothesis testing for net benefit

  • Andrew J. Vickers,
  • Ben Van Calster,
  • Laure Wynants,
  • Ewout W. Steyerberg

DOI
https://doi.org/10.1186/s41512-023-00148-y
Journal volume & issue
Vol. 7, no. 1
pp. 1 – 9

Abstract

Background
A number of recent papers have proposed methods to calculate confidence intervals and p values for net benefit used in decision curve analysis. These papers are sparse on the rationale for doing so. We aim to assess the relation between sampling variability, inference, and decision-analytic concepts.

Methods and results
We review the underlying theory of decision analysis. When we are forced into a decision, we should choose the option with the highest expected utility, irrespective of p values or uncertainty. This is in some distinction to traditional hypothesis testing, where a decision such as whether to reject a given hypothesis can be postponed. Application of inference for net benefit would generally be harmful. In particular, insisting that differences in net benefit be statistically significant would dramatically change the criteria by which we consider a prediction model to be of value. We argue instead that uncertainty related to sampling variation for net benefit should be thought of in terms of the value of further research. Decision analysis tells us which decision to make for now, but we may also want to know how much confidence we should have in that decision. If we are insufficiently confident that we are right, further research is warranted.

Conclusion
Null hypothesis testing or simple consideration of confidence intervals are of questionable value for decision curve analysis, and methods such as value of information analysis or approaches to assess the probability of benefit should be considered instead.
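
The abstract contrasts expected-utility decision making with hypothesis testing and suggests framing sampling variability as confidence in a decision rather than as a significance test. As a rough illustration only (not the authors' code; function names, the simulated data, and the bootstrap approach are assumptions), the sketch below computes net benefit, NB = TP/n − FP/n × pt/(1 − pt), for a model at a threshold probability pt, and bootstraps the probability that the model has higher net benefit than the default strategies of "treat all" and "treat none".

```python
import numpy as np

def net_benefit(y, p, threshold):
    """Net benefit of treating patients with predicted risk >= threshold:
    NB = TP/n - FP/n * threshold / (1 - threshold)."""
    n = len(y)
    treat = p >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

def prob_model_best(y, p, threshold, n_boot=2000, seed=0):
    """Bootstrap estimate of the probability that the model's net benefit
    exceeds both 'treat all' and 'treat none' (net benefit 0)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample patients with replacement
        yb, pb = y[idx], p[idx]
        nb_model = net_benefit(yb, pb, threshold)
        nb_all = net_benefit(yb, np.ones(n), threshold)  # treat everyone
        if nb_model > max(nb_all, 0.0):
            wins += 1
    return wins / n_boot

if __name__ == "__main__":
    # Illustrative usage with simulated data and a hypothetical risk model
    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    true_risk = 1 / (1 + np.exp(-(x - 1)))
    y = rng.binomial(1, true_risk)
    p = true_risk                            # pretend the model knows the true risk
    for t in (0.1, 0.2, 0.3):
        print(f"threshold {t}: NB = {net_benefit(y, p, t):.4f}, "
              f"P(model best) = {prob_model_best(y, p, t):.3f}")
```

In the spirit of the paper, the point estimate of net benefit determines which strategy to use now, while the bootstrap probability indicates how confident we should be in that choice and hence whether further research is worthwhile.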