Journal of Medical Internet Research (Dec 2023)

Comparisons of Quality, Correctness, and Similarity Between ChatGPT-Generated and Human-Written Abstracts for Basic Research: Cross-Sectional Study

  • Shu-Li Cheng,
  • Shih-Jen Tsai,
  • Ya-Mei Bai,
  • Chih-Hung Ko,
  • Chih-Wei Hsu,
  • Fu-Chi Yang,
  • Chia-Kuang Tsai,
  • Yu-Kang Tu,
  • Szu-Nian Yang,
  • Ping-Tao Tseng,
  • Tien-Wei Hsu,
  • Chih-Sung Liang,
  • Kuan-Pin Su

DOI: https://doi.org/10.2196/51229
Journal volume & issue: Vol. 25, e51229

Abstract

Background: ChatGPT may act as a research assistant, helping to organize the direction of thinking and summarize research findings. However, few studies have examined the quality, similarity (how closely a generated abstract resembles the original), and accuracy of the abstracts generated by ChatGPT when researchers provide the full text of basic research papers.

Objective: We aimed to assess the applicability of an artificial intelligence (AI) model in generating abstracts for basic preclinical research.

Methods: We selected 30 basic research papers from Nature, Genome Biology, and Biological Psychiatry. Excluding the abstracts, we input the full text into ChatPDF, an application of a language model based on ChatGPT, and prompted it to generate abstracts in the same style as the original papers. A total of 8 experts were invited to evaluate the quality of these abstracts (on a 0-10 Likert scale) and, in a blinded manner, to identify which abstracts had been generated by ChatPDF. The abstracts were also evaluated for their similarity to the original abstracts and for the accuracy of the AI-generated content.

Results: The quality of the ChatGPT-generated abstracts was lower than that of the actual abstracts (0-10 Likert scale: mean 4.72, SD 2.09 vs mean 8.09, SD 1.03; P<.001). The difference in quality was significant in the unstructured format (mean difference –4.33; 95% CI –4.79 to –3.86; P<.001) but smaller in the 4-subheading structured format (mean difference –2.33; 95% CI –2.79 to –1.86). Among the 30 ChatGPT-generated abstracts, 3 contained incorrect conclusions, and 10 were flagged as AI-generated content. The mean similarity between the original and generated abstracts was low (2.10%-4.40%). The blinded reviewers achieved 93% (224/240) accuracy in identifying which abstracts had been written with ChatGPT.

Conclusions: Using ChatGPT to generate a scientific abstract may not raise similarity concerns when real, human-written full texts are used. However, the quality of the ChatGPT-generated abstracts was suboptimal, and their accuracy was not 100%.
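For readers curious how the kinds of comparisons reported above could be computed, the following is a minimal Python sketch under stated assumptions: the score lists are hypothetical placeholders (the study rated 30 abstracts with 8 expert reviewers), Welch's t-test stands in for whatever statistical model the authors used, and difflib's SequenceMatcher stands in for their actual similarity tool; the names similarity_pct, original_scores, and generated_scores are illustrative and not from the paper.

import statistics
from difflib import SequenceMatcher

from scipy import stats

# Hypothetical 0-10 Likert quality scores (placeholders, not study data).
original_scores = [8.1, 7.9, 9.0, 8.5, 7.5, 8.3]
generated_scores = [4.5, 5.0, 3.8, 6.0, 4.2, 4.8]

# Welch's t-test for the quality difference (a stand-in: the abstract
# does not say which test the authors used).
t_stat, p_value = stats.ttest_ind(generated_scores, original_scores, equal_var=False)
mean_diff = statistics.mean(generated_scores) - statistics.mean(original_scores)
print(f"mean difference = {mean_diff:.2f}, P = {p_value:.4f}")

# Character-level similarity between two abstracts, as a percentage.
# SequenceMatcher is a stand-in for the study's similarity tool;
# the paper reports mean similarities of 2.10%-4.40%.
def similarity_pct(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio() * 100

print(f"similarity = {similarity_pct('Original abstract text.', 'Generated abstract text.'):.2f}%")

# Blinded-reviewer identification accuracy, as reported: 224 correct
# calls out of 240 judgments (consistent with 8 reviewers x 30 abstracts).
print(f"reviewer accuracy = {224 / 240:.0%}")  # -> 93%

The only figures taken from the paper itself are the 224/240 accuracy and the 2.10%-4.40% similarity range; everything else in the sketch is illustrative.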