BMC Research Notes (Jun 2022)

Is the future of peer review automated?

  • Robert Schulz,
  • Adrian Barnett,
  • René Bernard,
  • Nicholas J. L. Brown,
  • Jennifer A. Byrne,
  • Peter Eckmann,
  • Małgorzata A. Gazda,
  • Halil Kilicoglu,
  • Eric M. Prager,
  • Maia Salholz-Hillel,
  • Gerben ter Riet,
  • Timothy Vines,
  • Colby J. Vorland,
  • Han Zhuang,
  • Anita Bandrowski,
  • Tracey L. Weissgerber

DOI
https://doi.org/10.1186/s13104-022-06080-6
Journal volume & issue
Vol. 15, no. 1
pp. 1–5

Abstract

The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret a paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors’ conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study’s conclusions, potential impact, and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
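
To illustrate the kind of "straightforward" check the abstract refers to, the sketch below shows a minimal keyword-based screen for a few common reporting items. It is a hypothetical example, not an implementation of any tool discussed in the paper: the item names, regular-expression patterns, and the function screen_manuscript are all assumptions chosen for illustration. It also makes the paper's point concrete, since such a screen can only detect whether an item is mentioned, not whether the design or conclusions are sound.

    import re

    # Illustrative reporting items and keyword patterns (assumptions for this
    # sketch; not taken from any specific screening tool described in the paper).
    CHECKS = {
        "sample size": r"\bsample size\b|\bpower (analysis|calculation)\b",
        "blinding": r"\bblind(ed|ing)\b|\bmasked\b",
        "randomization": r"\brandomi[sz](ed|ation)\b",
        "ethics approval": r"\bethic(s|al) (committee|approval|board)\b|\birb\b",
        "data availability": r"\bdata (are|is) available\b|\bdata availability\b",
    }

    def screen_manuscript(text: str) -> dict:
        """Flag whether each reporting item appears to be mentioned.

        This only detects keyword presence; it cannot judge whether the methods
        are appropriate or the conclusions are supported, which is why human
        editors and reviewers remain essential.
        """
        lowered = text.lower()
        return {item: bool(re.search(pattern, lowered))
                for item, pattern in CHECKS.items()}

    if __name__ == "__main__":
        manuscript = (
            "Mice were randomized to treatment groups and assessors were "
            "blinded. The study was approved by the institutional ethics "
            "committee."
        )
        for item, found in screen_manuscript(manuscript).items():
            print(f"{item}: {'mentioned' if found else 'not found'}")

Running the example would flag blinding, randomization, and ethics approval as mentioned, while sample size and data availability would be reported as not found, mirroring how an automated report might prompt authors or reviewers to look more closely at those sections.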

Keywords