PLoS ONE (Jan 2012)

From the trenches: a cross-sectional study applying the GRADE tool in systematic reviews of healthcare interventions.

  • Lisa Hartling,
  • Ricardo M Fernandes,
  • Jennifer Seida,
  • Ben Vandermeer,
  • Donna M Dryden

DOI: https://doi.org/10.1371/journal.pone.0034697
Journal volume & issue: Vol. 7, No. 4, p. e34697

Abstract


Background: GRADE was developed to address shortcomings of tools used to rate the quality of a body of evidence. While much has been published about GRADE, there are few empirical and systematic evaluations.

Objective: To assess GRADE for systematic reviews (SRs) in terms of inter-rater agreement and to identify areas of uncertainty.

Design: Cross-sectional, descriptive study.

Methods: We applied GRADE to three SRs (n = 48, 66, and 75 studies, respectively) with 29 comparisons and 12 outcomes overall. Two reviewers graded the evidence independently for outcomes deemed clinically important a priori. Inter-rater reliability was assessed using kappas for four main domains (risk of bias, consistency, directness, and precision) and for overall quality of evidence.

Results: For the first review, reliability was: κ = 0.41 for risk of bias; 0.84 for consistency; 0.18 for precision; and 0.44 for overall quality. Kappa could not be calculated for directness because one rater assessed all items as direct; assessors agreed in 41% of cases. For the second review, reliability was: 0.37 for consistency and 0.19 for precision. Kappa could not be assessed for the other items; assessors agreed in 33% of cases for risk of bias, 100% for directness, and 58% for overall quality. For the third review, reliability was: 0.06 for risk of bias; 0.79 for consistency; 0.21 for precision; and 0.18 for overall quality. Assessors agreed in 100% of cases for directness. Precision created the most uncertainty, owing to difficulties in identifying the "optimal" information size and the "clinical decision threshold", as well as in making assessments when there was no meta-analysis. The risk of bias domain also created uncertainty, particularly for nonrandomized studies.

Conclusions: As researchers with varied levels of training and experience use GRADE, there is a risk of variability in interpretation and application. This study shows variable agreement across the GRADE domains, reflecting areas where further guidance is required.
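For readers unfamiliar with the statistic: Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance from each rater's marginal rating frequencies. The minimal Python sketch below (the function names and example ratings are illustrative assumptions, not the study's data or code) shows both the kappa and percent-agreement calculations, and why kappa becomes uninformative when one rater assigns the same category to every item, as the abstract reports for directness: observed agreement then equals chance agreement, so only raw percent agreement is worth reporting.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters over the same items.

    Returns None when expected chance agreement equals 1 (both raters
    constant on the same category), where kappa is undefined (0/0).
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed agreement.
    p_o = percent_agreement(rater_a, rater_b)

    # Expected chance agreement from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    if p_e == 1.0:
        return None
    return (p_o - p_e) / (1 - p_e)

# Hypothetical directness ratings: rater A calls every comparison "direct",
# rater B does not. Observed agreement then equals chance agreement.
a = ["direct"] * 10
b = ["direct"] * 4 + ["indirect"] * 6
print(cohens_kappa(a, b))        # 0.0
print(percent_agreement(a, b))   # 0.4

In this hypothetical example, kappa collapses to 0.0 while raw agreement is 40%; if both raters were constant on the same category, the denominator 1 − p_e would vanish and the function would return None. Either way the statistic carries no information, which is presumably why the study falls back to percent agreement for the directness domain.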