PLoS ONE (Jan 2020)

A systematic review of machine learning models for predicting outcomes of stroke with structured data.

  • Wenjuan Wang,
  • Martin Kiik,
  • Niels Peek,
  • Vasa Curcin,
  • Iain J Marshall,
  • Anthony G Rudd,
  • Yanzhong Wang,
  • Abdel Douiri,
  • Charles D Wolfe,
  • Benjamin Bray

DOI
https://doi.org/10.1371/journal.pone.0234722
Journal volume & issue
Vol. 15, no. 6
p. e0234722

Abstract

Background and purpose: Machine learning (ML) has attracted much attention with the hope that it could make use of large, routinely collected datasets and deliver accurate personalised prognoses. The aim of this systematic review is to identify and critically appraise the reporting and development of ML models for predicting outcomes after stroke.

Methods: We searched PubMed and Web of Science from 1990 to March 2019, using previously published search filters for stroke, ML, and prediction models. We focused on structured clinical data, excluding image and text analysis. This review was registered with PROSPERO (CRD42019127154).

Results: Eighteen studies were eligible for inclusion. Most studies reported fewer than half of the terms in the reporting quality checklist. The most frequently predicted stroke outcomes were mortality (7 studies) and functional outcome (5 studies). The most commonly used ML methods were random forests (9 studies), support vector machines (8 studies), decision trees (6 studies), and neural networks (6 studies). The median sample size was 475 (range 70-3184), with a median of 22 predictors (range 4-152) considered. All studies evaluated discrimination, thirteen of them using the area under the ROC curve, whilst calibration was assessed in only three. Two studies performed external validation. None described the final model sufficiently well to reproduce it.

Conclusions: The use of ML for predicting stroke outcomes is increasing. However, few studies met basic reporting standards for clinical prediction tools, and none made their models available in a way that could be used or evaluated. Major improvements in ML study conduct and reporting are needed before it can meaningfully be considered for practice.
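As background for the evaluation terms used in the Results, discrimination is commonly quantified with the area under the ROC curve, and calibration by comparing observed event rates with mean predicted risks in risk groups. The sketch below is not taken from any of the reviewed studies; it is a minimal illustration using scikit-learn on synthetic data (the 22 predictors and random forest model mirror the review's median predictor count and most common method, purely as an example).

```python
# Minimal sketch (synthetic data, not from the reviewed studies) of how
# discrimination (AUC) and calibration of a binary stroke-outcome model
# might be assessed with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)

# Hypothetical structured data: 500 patients, 22 predictors (the median in the
# review), with a synthetic binary outcome driven by the first predictor.
X = rng.normal(size=(500, 22))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Random forest: the most commonly used ML method among the included studies.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
y_prob = model.predict_proba(X_test)[:, 1]

# Discrimination: area under the ROC curve on held-out data.
print(f"AUC = {roc_auc_score(y_test, y_prob):.2f}")

# Calibration: observed event rate vs. mean predicted risk in bins.
obs, pred = calibration_curve(y_test, y_prob, n_bins=5)
for o, p in zip(obs, pred):
    print(f"mean predicted risk {p:.2f} -> observed rate {o:.2f}")
```

A simple train/test split is used here for brevity; the studies in the review variously relied on internal validation schemes such as cross-validation, and only two performed external validation on independent data.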