PLoS ONE (Jan 2025)

ChatGPT-4o can serve as the second rater for data extraction in systematic reviews.

  • Mette Motzfeldt Jensen,
  • Mathias Brix Danielsen,
  • Johannes Riis,
  • Karoline Assifuah Kristjansen,
  • Stig Andersen,
  • Yoshiro Okubo,
  • Martin Grønbech Jørgensen

DOI: https://doi.org/10.1371/journal.pone.0313401
Journal volume & issue: Vol. 20, No. 1, p. e0313401

Abstract


Background: Systematic reviews distill a large body of evidence and support the transfer of knowledge from clinical trials to guidelines, yet they are time-consuming. Artificial intelligence (AI), such as ChatGPT-4o, may streamline data extraction, but its efficacy requires validation.

Objective: This study aims to (1) evaluate the validity of ChatGPT-4o for data extraction compared to human reviewers, and (2) test the reproducibility of ChatGPT-4o's data extraction.

Methods: We conducted a comparative study using papers from an ongoing systematic review on exercise to reduce fall risk. Data extracted by ChatGPT-4o were compared to a reference standard: data extracted by two independent human reviewers. Validity was assessed by categorizing the extracted data into five categories ranging from completely correct to false data. Reproducibility was evaluated by comparing data extracted in two separate sessions using different ChatGPT-4o accounts.

Results: ChatGPT-4o extracted a total of 484 data points across 11 papers. The AI's data extraction was 92.4% accurate (95% CI: 89.5% to 94.5%) and produced false data in 5.2% of cases (95% CI: 3.4% to 7.4%). Reproducibility between the two sessions was high, with an overall agreement of 94.1%. Reproducibility decreased when information was not reported in the papers, with an agreement of 77.2%.

Conclusion: The validity and reproducibility of ChatGPT-4o were high for data extraction in systematic reviews. ChatGPT-4o qualified as a second reviewer for systematic reviews and showed potential for future advancements in summarizing data.
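The abstract does not state how the 95% confidence intervals for the accuracy and false-data proportions were computed. The following is a minimal sketch, assuming a Clopper-Pearson (exact binomial) interval and raw counts back-calculated from the reported percentages (roughly 447 correct and 25 false of 484 data points, which are inferences, not figures from the paper); this assumption yields intervals close to those reported.

```python
# Sketch: reproducing the abstract's proportion estimates and 95% CIs.
# Assumptions (not stated in the abstract): Clopper-Pearson intervals,
# and counts inferred from the reported percentages of 484 data points.
from statsmodels.stats.proportion import proportion_confint

TOTAL = 484        # data points extracted across 11 papers
CORRECT = 447      # ~92.4% of 484 (inferred count)
FALSE_DATA = 25    # ~5.2% of 484 (inferred count)

for label, count in [("accurate", CORRECT), ("false data", FALSE_DATA)]:
    # method="beta" selects the Clopper-Pearson exact binomial interval
    low, high = proportion_confint(count, TOTAL, alpha=0.05, method="beta")
    print(f"{label}: {count / TOTAL:.1%} (95% CI: {low:.1%} to {high:.1%})")
```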