Journal of Nature and Science of Medicine (Jul 2024)

ChatGPT as a Tool for Oral Health Education: A Systematic Evaluation of ChatGPT Responses to Patients’ Oral Health-related Queries

  • Gadde Praveen,
  • U. L. S. Poornima,
  • Anitha Akkaloori,
  • Vakalapudi Bharathi

DOI
https://doi.org/10.4103/jnsm.jnsm_208_23
Journal volume & issue
Vol. 7, no. 3
pp. 154 – 157

Abstract


Background: ChatGPT holds promise as a tool for oral health education, provided valid concerns are proactively examined and addressed. This study was therefore conducted to evaluate ChatGPT's responses to patients' most common oral health-related queries.

Methods: A cross-sectional study was conducted to gather a dataset of oral health-related queries from patients attending a dental institution. The dataset was preprocessed and formatted to remove irrelevant or duplicate queries and was then supplied to ChatGPT to generate responses. Two dental public health experts independently reviewed the ChatGPT responses for clarity, accuracy, relevance, comprehensiveness, consistency, acceptance, and bias on a 5-point Likert scale. The intraclass correlation coefficient (ICC) was used to evaluate interrater reliability, and scores were summarized using descriptive statistics.

Results: A total of 563 oral health-related queries were gathered from 120 patients; after removal of irrelevant or duplicate queries, 105 were included in the final dataset. The ICC of 0.878 (95% confidence interval, 0.841 to 0.910) indicated good reliability between the reviewers. The majority of ChatGPT responses were clearly understandable (95.24%), scientifically accurate and relevant to the query (87.62%), comprehensive (83.81%), consistent (84.76%), and acceptable without any edits (86.67%). However, the reviewers strongly agreed that only 40.96% of the responses were free of bias. The overall score was high, with a mean of 4.72 ± 0.30. Qualitative analysis of the reviewers' comments indicated that the responses tended to be rather long, though comprehensive.

Conclusions: ChatGPT generated clear, scientifically accurate, relevant, comprehensive, and consistent responses to diverse oral health-related queries, despite some notable limitations.
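As an illustration of the interrater reliability analysis described above, the sketch below shows how an intraclass correlation coefficient for two raters' Likert scores could be computed in Python with the pingouin library. This is not the authors' code: the ratings are made-up example values, and the study does not report which ICC model was used, so the choice shown here is an assumption.

```python
# Minimal sketch (not the study's analysis code): estimating interrater
# reliability between two reviewers' 5-point Likert scores with an
# intraclass correlation coefficient (ICC). Scores are illustrative only.
import pandas as pd
import pingouin as pg

# Long-format table: one row per (response, reviewer) pair.
ratings = pd.DataFrame({
    "response_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "reviewer":    ["A", "B"] * 5,
    "score":       [5, 5, 4, 4, 5, 4, 3, 3, 4, 5],
})

icc = pg.intraclass_corr(
    data=ratings,
    targets="response_id",  # the items being rated (ChatGPT responses)
    raters="reviewer",      # the two dental public health experts
    ratings="score",        # the Likert scores
)

# The paper does not state the ICC model; a two-way model with absolute
# agreement (e.g., ICC2) is a common choice for a fixed pair of raters.
print(icc[["Type", "ICC", "CI95%"]])
```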
