PLoS ONE (Jan 2017)

Sample size calculation to externally validate scoring systems based on logistic regression models.

  • Antonio Palazón-Bru,
  • David Manuel Folgado-de la Rosa,
  • Ernesto Cortés-Castell,
  • María Teresa López-Cascales,
  • Vicente Francisco Gil-Guillén

DOI
https://doi.org/10.1371/journal.pone.0176726
Journal volume & issue
Vol. 12, no. 5
p. e0176726

Abstract


A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated and despite the fact that certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model.

The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study.

The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities estimated through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units.

In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature.

An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
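
The sketch below illustrates, in Python, the general bootstrap idea described in the abstract: for each candidate validation sample size, bootstrap samples are drawn, the area under the ROC curve is computed, observed event probabilities are estimated with a smooth (lowess) curve, and a lack-of-calibration measure in the spirit of the estimated calibration index is derived. This is a minimal illustration, not the authors' exact algorithm; the function names (`estimated_calibration_index`, `validation_sample_size`), the lowess smoothing fraction, the AUC and ECI thresholds, and the simulated data are all hypothetical assumptions.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(42)

    def estimated_calibration_index(y_true, p_pred):
        """ECI-style lack-of-calibration measure (simplified, hypothetical):
        mean squared distance between predicted probabilities and observed
        probabilities estimated with a smooth (lowess) calibration curve."""
        smooth = lowess(y_true, p_pred, frac=0.6, return_sorted=False)
        return float(np.mean((p_pred - smooth) ** 2) * 100)

    def validation_sample_size(p_pred_population, y_population,
                               candidate_sizes, n_boot=200,
                               auc_target=0.70, eci_max=1.0):
        """For each candidate validation sample size, draw bootstrap samples,
        compute the AUC and the ECI-style measure, and return the smallest
        size whose median performance meets the (hypothetical) targets."""
        for n in candidate_sizes:
            aucs, ecis = [], []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y_population), size=n)
                y, p = y_population[idx], p_pred_population[idx]
                if y.min() == y.max():  # need both events and non-events
                    continue
                aucs.append(roc_auc_score(y, p))
                ecis.append(estimated_calibration_index(y, p))
            if aucs and np.median(aucs) >= auc_target and np.median(ecis) <= eci_max:
                return n
        return None

    # Toy usage: simulate a population scored by a logistic regression model.
    x = rng.normal(size=5000)
    p_true = 1 / (1 + np.exp(-(-2.0 + 1.2 * x)))
    y_sim = rng.binomial(1, p_true)
    print(validation_sample_size(p_true, y_sim,
                                 candidate_sizes=range(100, 1001, 100)))

In practice, the stopping criteria would be chosen to reflect acceptable discrimination and calibration for the specific scoring system being validated; the thresholds shown here are placeholders for illustration only.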