PLoS ONE (Oct 2019)

Variation in methods, results and reporting in electronic health record-based studies evaluating routine care in gout: A systematic review.

  • Samantha S R Crossfield,
  • Lana Yin Hui Lai,
  • Sarah R Kingsbury,
  • Paul Baxter,
  • Owen Johnson,
  • Philip G Conaghan,
  • Mar Pujades-Rodriguez

DOI
https://doi.org/10.1371/journal.pone.0224272
Journal volume & issue
Vol. 14, no. 10
p. e0224272

Abstract

OBJECTIVE: To perform a systematic review examining the variation in methods, results, reporting and risk of bias in electronic health record (EHR)-based studies evaluating the management of a common musculoskeletal disease, gout.

METHODS: Two reviewers systematically searched MEDLINE, Scopus, Web of Science, CINAHL, PubMed, EMBASE and Google Scholar for all EHR-based studies published by February 2019 investigating gout pharmacological treatment. Information was extracted on study design, eligibility criteria, definitions, medication usage, effectiveness and safety data, comprehensiveness of reporting (RECORD), and Cochrane risk of bias (registered with PROSPERO, CRD42017065195).

RESULTS: We screened 5,603 titles/abstracts and 613 full texts, and selected 75 studies including 1.9 million gout patients. Gout diagnosis was defined in 26 ways across the studies, most commonly using a single diagnostic code (n = 31, 41.3%). Of the studies, 48.4% did not specify a disease-free period before 'incident' diagnosis. Medication use was suboptimal and varied with disease definition, while results regarding effectiveness and safety were broadly similar across studies despite variability in inclusion criteria. Comprehensiveness of reporting was variable, ranging from 73% (55/75) of studies appropriately discussing the limitations of EHR data use to 5% (4/75) reporting on key data-cleaning steps. Risk of bias was generally low.

CONCLUSION: The wide variation in case definitions and medication-related analyses among EHR-based studies has implications for reported medication use. This is amplified by variable comprehensiveness of reporting and by the limited consideration of EHR-relevant biases (e.g. data adequacy) in study assessment tools. We recommend accounting for these biases and performing sensitivity analyses on case definitions, and suggest changes to assessment tools to foster this.
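The conclusion's recommendation to run sensitivity analyses over case definitions can be made concrete with a small sketch. The following is a minimal, hypothetical Python/pandas example, not taken from the paper: it applies three alternative gout case definitions (echoing the variation the review found) to a tiny synthetic EHR extract and recomputes the proportion of cases prescribed urate-lowering therapy (ULT) under each definition. All column names, drug lists and definitions are illustrative assumptions.

```python
import pandas as pd

# Synthetic EHR extract (illustrative only; column names and codes are
# assumptions, not taken from the reviewed studies).
diagnoses = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4, 5],
    "code":       ["M10"] * 7,  # gout diagnostic code, repeated per encounter
})
prescriptions = pd.DataFrame({
    "patient_id": [1, 2, 4, 5],
    "drug":       ["allopurinol", "colchicine", "febuxostat", "colchicine"],
})

ULT_DRUGS = {"allopurinol", "febuxostat"}        # urate-lowering therapy
GOUT_DRUGS = ULT_DRUGS | {"colchicine"}          # any gout-related medication

code_counts = diagnoses.groupby("patient_id").size()
ult_patients = set(
    prescriptions.loc[prescriptions["drug"].isin(ULT_DRUGS), "patient_id"]
)
gout_rx_patients = set(
    prescriptions.loc[prescriptions["drug"].isin(GOUT_DRUGS), "patient_id"]
)

# Three alternative case definitions, mirroring the kind of variation
# reported across the 75 studies.
definitions = {
    "single diagnostic code": set(code_counts.index),
    "two or more diagnostic codes": set(code_counts[code_counts >= 2].index),
    "diagnostic code plus gout prescription": set(code_counts.index) & gout_rx_patients,
}

# Sensitivity analysis: recompute ULT prescribing prevalence per definition.
for name, cases in definitions.items():
    treated = len(cases & ult_patients)
    pct = 100 * treated / len(cases) if cases else float("nan")
    print(f"{name}: {len(cases)} cases, {pct:.0f}% on ULT")
```

In a real study, the definitions would come from validated clinical codelists, and the denominator would be restricted to incident cases with a specified disease-free look-back period, the step the review found missing in 48.4% of studies.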