Journal of Medical Internet Research (Aug 2024)

ChatGPT and Google Assistant as a Source of Patient Education for Patients With Amblyopia: Content Analysis

  • Gloria Wu,
  • David A Lee,
  • Weichen Zhao,
  • Adrial Wong,
  • Rohan Jhangiani,
  • Sri Kurniawan

DOI: https://doi.org/10.2196/52401
Journal volume & issue: Vol. 26, p. e52401

Abstract


Background: We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website, specifically the section on amblyopia. Of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google Assistant included 8 (31%).

Objective: Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the AAPOS guidelines for patient education on amblyopia.

Methods: ChatGPT-3.5 was used. The four questions, taken from the glossary section for amblyopia on the AAPOS website, were as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? The keywords from AAPOS, selected and approved by ophthalmologists (GW and DL), were words or phrases deemed significant for the education of patients with amblyopia. The Flesch-Kincaid Grade Level formula, approved by the US Department of Education, was used to evaluate the reading comprehension level of the responses from ChatGPT, Google Assistant, and AAPOS.

Results: In its responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned it once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of the AAPOS material was grade 11.4 (SD 2.1), the lowest of the three sources, while that of Google Assistant was grade 13.1 (SD 4.8), the highest and also the most variable across responses. ChatGPT's answers averaged grade 12.4 (SD 1.1) and were the most consistent in reading difficulty. Across the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).

Conclusions: ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including "see an ophthalmologist" on our websites and in our journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.
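For reference, the abstract cites but does not restate the Flesch-Kincaid Grade Level formula; its standard published form, which maps a text to a US school grade level, is:

$$\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$$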
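The abstract also does not specify how keyword inclusion was scored. A minimal sketch of one plausible coverage calculation, assuming case-insensitive substring matching over each source's pooled responses (the function name, matching rule, and variable names below are illustrative assumptions, not the authors' procedure):

```python
def keyword_coverage(responses: list[str], keywords: list[str]) -> float:
    """Return the fraction of keywords appearing anywhere in the responses.

    Assumption (not stated in the abstract): a keyword counts as "included"
    if it appears, case-insensitively, in any of a source's responses.
    """
    combined = " ".join(responses).lower()
    hits = sum(1 for kw in keywords if kw.lower() in combined)
    return hits / len(keywords)

# Hypothetical usage with the study's setup (26 keywords, 4 responses):
# coverage = keyword_coverage(chatgpt_responses, aapos_keywords)
# print(f"{coverage:.0%}")  # e.g., 11 of 26 keywords -> 42%
```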