PLoS ONE (Jan 2019)

Reporting preclinical anesthesia study (REPEAT): Evaluating the quality of reporting in the preclinical anesthesiology literature.

  • Dean A Fergusson,
  • Marc T Avey,
  • Carly C Barron,
  • Mathew Bocock,
  • Kristen E Biefer,
  • Sylvain Boet,
  • Stephane L Bourque,
  • Isidora Conic,
  • Kai Chen,
  • Yuan Yi Dong,
  • Grace M Fox,
  • Ronald B George,
  • Neil M Goldenberg,
  • Ferrante S Gragasin,
  • Prathiba Harsha,
  • Patrick J Hong,
  • Tyler E James,
  • Sarah M Larrigan,
  • Jenna L MacNeil,
  • Courtney A Manuel,
  • Sarah Maximos,
  • David Mazer,
  • Rohan Mittal,
  • Ryan McGinn,
  • Long H Nguyen,
  • Abhilasha Patel,
  • Philippe Richebé,
  • Tarit K Saha,
  • Benjamin E Steinberg,
  • Sonja D Sampson,
  • Duncan J Stewart,
  • Summer Syed,
  • Kimberly Vella,
  • Neil L Wesch,
  • Manoj M Lalu,
  • Canadian Perioperative Anesthesia Clinical Trials Group

DOI: https://doi.org/10.1371/journal.pone.0215221
Journal volume & issue: Vol. 14, No. 5, p. e0215221

Abstract


Poor reporting quality may contribute to irreproducibility of results and failed 'bench-to-bedside' translation. Consequently, guidelines have been developed to improve the complete and transparent reporting of in vivo preclinical studies. To examine the impact of such guidelines on core methodological and analytical reporting items in the preclinical anesthesiology literature, we sampled a cohort of studies. Preclinical in vivo studies published in Anesthesiology, Anesthesia & Analgesia, Anaesthesia, and the British Journal of Anaesthesia (2008-2009, 2014-2016) were identified. Data were extracted independently and in duplicate. Reporting completeness was assessed using the National Institutes of Health Principles and Guidelines for Reporting Preclinical Research. Risk ratios were used for comparative analyses. Of 7615 screened articles, 604 met our inclusion criteria, comprising experiments on 52 490 animals. The most common topic of investigation was pain and analgesia (30%), rodents were the most frequently used animals (77%), and studies were most commonly conducted in the United States (36%). Use of preclinical reporting guidelines was stated in 10% of applicable articles. A minority of studies fully reported on replicates (0.3%), randomization (10%), blinding (12%), sample-size estimation (3%), and inclusion/exclusion criteria (5%). Statistics were well reported (81%). Comparative analysis demonstrated few differences in reporting rigor between journals, including those that endorsed reporting guidelines. Principal items of study design were infrequently reported, with few differences between journals. Methods to improve implementation of, and adherence to, community-based reporting guidelines may be necessary to increase transparent and consistent reporting in the preclinical anesthesiology literature.
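The abstract notes that risk ratios were used for comparative analyses of reporting rates (e.g., between journals). As a minimal illustrative sketch only, and not the study's own code or data, the snippet below shows how a risk ratio with a Wald 95% confidence interval on the log scale is typically computed; the counts and comparison are hypothetical.

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald 95% CI on the log scale."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of ln(RR) for two independent proportions
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, (lower, upper)

# Hypothetical counts: articles fully reporting randomization in two journals
rr, ci = risk_ratio(events_a=30, total_a=150, events_b=18, total_b=150)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

A risk ratio of 1.0 indicates no difference in the proportion of articles reporting an item; confidence intervals excluding 1.0 suggest a difference between the groups compared.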