PLoS ONE (Dec 2016)

Methods for Developing Evidence Reviews in Short Periods of Time: A Scoping Review.

  • Ahmed M Abou-Setta,
  • Maya Jeyaraman,
  • Abdelhamid Attia,
  • Hesham G Al-Inany,
  • Mauricio Ferri,
  • Mohammed T Ansari,
  • Chantelle M Garritty,
  • Kenneth Bond,
  • Susan L Norris

DOI
https://doi.org/10.1371/journal.pone.0165903
Journal volume & issue
Vol. 11, no. 12
p. e0165903

Abstract


INTRODUCTION: Rapid reviews (RRs), which use abbreviated systematic review (SR) methods, are becoming more popular among decision-makers. This World Health Organization-commissioned study sought to summarize RR methods, identify differences, and highlight potential biases between RRs and SRs.

METHODS: Review of RR methods (Key Question 1 [KQ1]), of meta-epidemiologic studies comparing the reliability/validity of RR and SR methods (KQ2), and of their potential associated biases (KQ3). We searched Medline, EMBASE, the Cochrane Library, and grey literature; checked reference lists; used personal contacts; and crowdsourced (e.g. email listservs). Selection and data extraction were conducted by one reviewer (KQ1) or two reviewers independently (KQ2-3).

RESULTS: Across all KQs, we identified 42,743 citations through the literature searches. KQ1: RR methods from 29 organizations were reviewed; there was no consensus on which aspects of the SR process to abbreviate. KQ2: Studies comparing the conclusions of RRs and SRs (n = 9) found them to be generally similar. Where major differences were identified, they were attributed to the inclusion of evidence from different sources (e.g. searching different databases or including different study designs). KQ3: Potential biases introduced into the review process were well identified, although not necessarily supported by empirical evidence, and focused mainly on selective outcome reporting and publication biases.

CONCLUSION: RR approaches are context- and organization-specific. The existing comparative evidence has found similar conclusions derived from RRs and SRs, but there is a lack of evidence comparing the potential for bias in the two evidence synthesis approaches. Further research and decision aids are needed to help decision-makers and reviewers balance the benefits of providing timely evidence with the potential for biased findings.