BMC Medicine (Jul 2018)

Poor reporting of multivariable prediction model studies: towards a targeted implementation strategy of the TRIPOD statement

  • Pauline Heus,
  • Johanna A. A. G. Damen,
  • Romin Pajouheshnia,
  • Rob J. P. M. Scholten,
  • Johannes B. Reitsma,
  • Gary S. Collins,
  • Douglas G. Altman,
  • Karel G. M. Moons,
  • Lotty Hooft

DOI
https://doi.org/10.1186/s12916-018-1099-2
Journal volume & issue
Vol. 16, no. 1
pp. 1–12

Abstract

Background: As complete reporting is essential to judge the validity and applicability of multivariable prediction models, a guideline for the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) was introduced. We assessed the completeness of reporting of prediction model studies published just before the introduction of the TRIPOD statement, to refine and tailor its implementation strategy.

Methods: Within each of 37 clinical domains, the 10 journals with the highest journal impact factor were selected. A PubMed search was performed to identify prediction model studies published in these journals before the launch of TRIPOD (May 2014). Eligible publications reported on the development or external validation of a multivariable prediction model (either diagnostic or prognostic) or on the incremental value of adding a predictor to an existing model.

Results: We included 146 publications (84% prognostic), from which we assessed 170 models: 73 (43%) on model development, 43 (25%) on external validation, 33 (19%) on incremental value, and 21 (12%) on combined development and external validation of the same model. Overall, publications adhered to a median of 44% (25th–75th percentile 35–52%) of TRIPOD items, with 44% (35–53%) for prognostic and 41% (34–48%) for diagnostic models. TRIPOD items that were completely reported for less than 25% of the models concerned the abstract (2%), title (5%), blinding of predictor assessment (6%), comparison of development and validation data (11%), model updating (14%), model performance (14%), model specification (17%), characteristics of participants (21%), model performance measures (methods) (21%), and model-building procedures (24%). Most often reported were TRIPOD items regarding overall interpretation (96%), source of data (95%), and risk groups (90%).

Conclusions: More than half of the items considered essential for transparent reporting were not fully addressed in publications of multivariable prediction model studies. Essential information for using a model in individual risk prediction, i.e. model specifications and model performance, was incomplete for more than 80% of the models. Items that require improved reporting are the title, abstract, and model-building procedures, as they are crucial for the identification and external validation of prediction models.
