JMIR Research Protocols (Nov 2021)

A Deep Learning Approach to Refine the Identification of High-Quality Clinical Research Articles From the Biomedical Literature: Protocol for Algorithm Development and Validation

  • Wael Abdelkader,
  • Tamara Navarro,
  • Rick Parrish,
  • Chris Cotoi,
  • Federico Germini,
  • Lori-Ann Linkins,
  • Alfonso Iorio,
  • R Brian Haynes,
  • Sophia Ananiadou,
  • Lingyang Chu,
  • Cynthia Lokker

DOI
https://doi.org/10.2196/29398
Journal volume & issue
Vol. 10, no. 11
p. e29398

Abstract


Background: A barrier to practicing evidence-based medicine is the rapidly increasing body of biomedical literature. Use of method terms to limit the search can help reduce the burden of screening articles for clinical relevance; however, such terms are limited by their partial dependence on indexing terms and usually produce low precision, especially when high sensitivity is required. Machine learning has been applied to the identification of high-quality literature with the potential to achieve high precision without sacrificing sensitivity. The use of artificial intelligence has shown promise to improve the efficiency of identifying sound evidence.

Objective: The primary objective of this research is to derive and validate deep learning models, using iterations of Bidirectional Encoder Representations from Transformers (BERT), to retrieve high-quality, high-relevance evidence for clinical consideration from the biomedical literature.

Methods: Using the HuggingFace Transformers library, we will experiment with variations of BERT models, including BERT, BioBERT, BlueBERT, and PubMedBERT, to determine which have the best performance in identifying articles that meet quality criteria. Our experiments will use a large data set of over 150,000 PubMed citations from 2012 to 2020 that have been manually labeled based on their methodological rigor for clinical use. We will evaluate and report on the performance of the classifiers in categorizing articles based on their likelihood of meeting quality criteria. We will report fine-tuning hyperparameters for each model, as well as their performance metrics, including recall (sensitivity), specificity, precision, accuracy, F-score, the number of articles that need to be read before finding one that is positive (meets criteria), and classification probability scores.

Results: Initial model development is underway, with further development planned for early 2022. Performance testing is expected to start in February 2022. Results will be published in 2022.

Conclusions: The experiments will aim to improve the precision of retrieving high-quality articles by applying a machine learning classifier to PubMed searching.

International Registered Report Identifier (IRRID): DERR1-10.2196/29398
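To make the Methods concrete, the following is a minimal sketch (not the authors' actual pipeline) of fine-tuning one of the candidate BERT variants as a binary quality classifier with the HuggingFace Transformers library. The model identifier, the file name labeled_citations.csv, the text/label column names, and all hyperparameters are illustrative assumptions, not values taken from the protocol.

```python
# Hypothetical sketch: fine-tune a biomedical BERT variant to classify PubMed
# citations (title + abstract text) as meeting or not meeting quality criteria.
# File name, column names, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # one candidate variant

# Assumed CSV with a "text" column (title + abstract) and a binary "label" column
# (1 = meets methodological quality criteria).
dataset = load_dataset("csv", data_files="labeled_citations.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1, seed=42)

tokenizer = AutoTokenizer.from_pretrained(MODEL)

def tokenize(batch):
    # Truncate to BERT's 512-token limit; titles plus abstracts usually fit.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

args = TrainingArguments(
    output_dir="quality-classifier",
    num_train_epochs=3,            # illustrative fine-tuning hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,           # enables default padding collator
)
trainer.train()
```

Swapping MODEL for another checkpoint (BERT base, BioBERT, or BlueBERT) is the only change needed to compare the variants named in the Methods.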
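The performance metrics listed in the Methods can all be derived from a binary confusion matrix. The sketch below shows one common way to compute them, interpreting "the number of articles that need to be read before finding one that is positive" as the reciprocal of precision; the function and variable names are placeholders.

```python
# Illustrative metric calculation for a binary quality classifier,
# assuming label 1 = "meets quality criteria".
from sklearn.metrics import confusion_matrix

def quality_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    number_needed_to_read = 1 / precision  # articles screened per true positive
    return {
        "recall": recall,
        "specificity": specificity,
        "precision": precision,
        "accuracy": accuracy,
        "f_score": f_score,
        "number_needed_to_read": number_needed_to_read,
    }

# Example usage with toy predictions:
# quality_metrics([1, 0, 1, 0, 1], [1, 0, 0, 0, 1])
```

Classification probability scores, also listed in the Methods, would come directly from the model's softmax outputs rather than from the confusion matrix.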