PLoS Medicine (May 2012)

Reporting and methods in clinical prediction research: a systematic review.

  • Walter Bouwmeester,
  • Nicolaas P A Zuithoff,
  • Susan Mallett,
  • Mirjam I Geerlings,
  • Yvonne Vergouwe,
  • Ewout W Steyerberg,
  • Douglas G Altman,
  • Karel G M Moons

DOI: https://doi.org/10.1371/journal.pmed.1001221
Journal volume & issue: Vol. 9, No. 5, pp. 1–12

Abstract

Background

We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.

Methods and findings

We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor, as a measure of statistical power, could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05.

Conclusions

The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
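
The events-per-predictor figure mentioned in the abstract (often called events per variable, EPV) is a simple ratio: the number of outcome events divided by the number of candidate predictor parameters, with roughly ten events per predictor being the commonly cited minimum. The short Python sketch below illustrates the calculation only; the counts used are hypothetical and are not taken from the studies in the review.

    # Illustrative events-per-predictor (EPV) calculation.
    # The counts below are hypothetical, not data from the review.
    def events_per_predictor(n_events: int, n_candidate_predictors: int) -> float:
        """Return the EPV ratio: outcome events per candidate predictor."""
        return n_events / n_candidate_predictors

    epv = events_per_predictor(n_events=80, n_candidate_predictors=12)
    print(f"EPV = {epv:.1f}")  # 6.7
    if epv < 10:
        print("Below the commonly recommended ten events per predictor.")
    else:
        print("Meets the commonly recommended ten events per predictor.")

In this hypothetical example, 80 events spread over 12 candidate predictors gives an EPV of about 6.7, which falls short of the ten-events-per-predictor rule of thumb referred to in the abstract.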