Diagnostic and Prognostic Research (May 2022)

Does poor methodological quality of prediction modeling studies translate to poor model performance? An illustration in traumatic brain injury

  • Isabel R. A. Retel Helmrich
  • Ana Mikolić
  • David M. Kent
  • Hester F. Lingsma
  • Laure Wynants
  • Ewout W. Steyerberg
  • David van Klaveren

DOI: https://doi.org/10.1186/s41512-022-00122-0
Journal volume & issue: Vol. 6, no. 1, pp. 1–12

Abstract

Background: Prediction modeling studies often have methodological limitations, which may compromise model performance in new patients and settings. We aimed to examine the relation between the methodological quality of model development studies and model performance at external validation.

Methods: We systematically searched for externally validated multivariable prediction models that predict functional outcome following moderate or severe traumatic brain injury. Risk of bias (RoB) and applicability of the development studies were assessed with the Prediction model Risk Of Bias Assessment Tool (PROBAST). Each model was rated for whether it was presented in sufficient detail to be used in practice. Model performance was described in terms of discrimination (AUC) and calibration. The delta AUC (dAUC) was calculated for all models to quantify the percentage change in discrimination between development and validation. Generalized estimating equations (GEE) were used to examine the relation between methodological quality and dAUC while accounting for clustering.

Results: We included 54 publications, comprising ten development studies (presenting 18 prediction models) and 52 external validation studies (245 unique validations). Two development studies (four models) had low risk of bias (RoB); the remaining eight (14 models) had high or unclear RoB. The median dAUC was positive for low RoB models (8%, IQR −4% to 21%) and negative for high RoB models (−18%, IQR −43% to 2%). GEE showed a larger average decrease in discrimination for high RoB models (−32%, 95% CI −48 to −15) and for unclear RoB models (−13%, 95% CI −16 to −10) than for low RoB models.

Conclusion: Lower methodological quality at model development is associated with poorer model performance at external validation. Our findings emphasize the importance of adhering to methodological principles and reporting guidelines in prediction modeling studies.
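The abstract reports the analysis only in outline. As a rough, non-authoritative sketch of the clustered analysis it describes, the Python snippet below fits a GEE of dAUC on risk-of-bias category with validations clustered within prediction models, using statsmodels. Everything in it is illustrative: the data are synthetic, loosely seeded from the figures reported above, and the column names (model_id, rob, dauc) and the 4/7/7 split of models across RoB categories are assumptions, not the study's dataset or code. One plausible reading of dAUC, assuming change is measured relative to discrimination above chance (AUC = 0.5), is dAUC = (AUC_validation − AUC_development) / (AUC_development − 0.5) × 100%.

```python
# Illustrative sketch only: synthetic data mimicking the review's setup
# (18 models, a few hundred validations), not the study's actual dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for model_id in range(18):
    # Assumed split: 4 low RoB models (as reported in the abstract); the
    # 14 high/unclear models are split 7/7 here purely for illustration.
    rob = "low" if model_id < 4 else ("unclear" if model_id < 11 else "high")
    mean_dauc = {"low": 8.0, "unclear": -13.0, "high": -32.0}[rob]
    for _ in range(int(rng.integers(5, 25))):  # several validations per model
        rows.append({"model_id": model_id, "rob": rob,
                     "dauc": mean_dauc + rng.normal(0, 15)})
df = pd.DataFrame(rows)

# Exchangeable working correlation accounts for the clustering of
# validations within the same model; low RoB is the reference category.
fit = smf.gee("dauc ~ C(rob, Treatment(reference='low'))",
              groups="model_id", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```

The exchangeable structure treats any two validations of the same model as equally correlated, and GEE's robust (sandwich) standard errors keep the confidence intervals valid even when that working assumption is imperfect.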

Keywords