Clinical Ophthalmology (Sep 2024)

Comparing the Ability of Google and ChatGPT to Accurately Respond to Oculoplastics-Related Patient Questions and Generate Customized Oculoplastics Patient Education Materials

  • Cohen SA,
  • Yadlapalli N,
  • Tijerina JD,
  • Alabiad CR,
  • Chang JR,
  • Kinde B,
  • Mahoney NR,
  • Roelofs KA,
  • Woodward JA,
  • Kossler AL

Journal volume & issue
Vol. 18
pp. 2647 – 2655

Abstract

Samuel A Cohen,1 Nikhita Yadlapalli,2 Jonathan D Tijerina,3 Chrisfouad R Alabiad,3 Jessica R Chang,4 Benyam Kinde,5 Nicholas R Mahoney,6 Kelsey A Roelofs,1 Julie A Woodward,7 Andrea L Kossler5

1Department of Ophthalmology, Stein Eye Institute at University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA, USA; 2Department of Ophthalmology, FIU Herbert Wertheim College of Medicine, Miami, FL, USA; 3Department of Ophthalmology, Bascom Palmer Eye Institute at University of Miami Miller School of Medicine, Miami, FL, USA; 4Department of Ophthalmology, USC Roski Eye Institute at University of Southern California Keck School of Medicine, Los Angeles, CA, USA; 5Department of Ophthalmology, Byers Eye Institute at Stanford University School of Medicine, Palo Alto, CA, USA; 6Department of Ophthalmology, Wilmer Eye Institute at Johns Hopkins University School of Medicine, Baltimore, MD, USA; 7Department of Ophthalmology, Duke Eye Center at Duke University School of Medicine, Durham, NC, USA

Correspondence: Andrea L Kossler, Department of Ophthalmology, Byers Eye Institute at Stanford University School of Medicine, 2452 Watson Court, Palo Alto, CA, USA, Email [email protected]

Purpose: To compare the accuracy and readability of responses to oculoplastics patient questions provided by Google and ChatGPT, and to assess the ability of ChatGPT to create customized patient education materials.

Methods: We executed a Google search to identify the 3 most frequently asked patient questions (FAQs) related to 10 oculoplastics conditions. FAQs were entered into both the Google search engine and the ChatGPT tool, and responses were recorded. Responses were graded for readability using five validated readability indices and for accuracy by six oculoplastic surgeons. ChatGPT was then instructed to create patient education materials at various reading levels for 8 oculoplastics procedures, and the accuracy and readability of the resulting procedural explanations were assessed.

Results: ChatGPT responses to patient FAQs were written at a significantly higher average grade level than Google responses (grade 15.6 vs 10.0, p < 0.001). ChatGPT responses (93% accuracy) were significantly more accurate (p < 0.001) than Google responses (78% accuracy) and were preferred by expert panelists (79%). ChatGPT accurately explained oculoplastics procedures at an above-average reading level. When instructed to rewrite patient education materials at a lower reading level, ChatGPT reduced the average grade level by approximately 4 grades (15.7 vs 11.7, p < 0.001) without sacrificing accuracy.

Conclusion: ChatGPT has the potential to provide patients with accurate information regarding their oculoplastics conditions. ChatGPT may also be used by oculoplastic surgeons as an accurate tool to provide customizable patient education for patients with varying health literacy. A better understanding of oculoplastics conditions and procedures among patients can lead to informed eye care decisions.

Keywords: oculoplastics, google, ChatGPT, readability, accuracy
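For readers interested in how a response's reading grade level can be scored, the sketch below averages five commonly used, validated readability indices. The abstract does not name the specific five indices the authors used, so the particular indices chosen here (Flesch-Kincaid Grade Level, Gunning Fog, SMOG, Coleman-Liau, Automated Readability Index) and the use of the Python textstat library are illustrative assumptions, not the study's method.

```python
# Minimal sketch: average reading grade level across five readability indices.
# The specific indices and the textstat library are assumptions for illustration;
# the study does not specify which five indices were used.
from statistics import mean

import textstat  # pip install textstat


def average_grade_level(response_text: str) -> float:
    """Return the mean grade level of a text across five readability indices."""
    scores = [
        textstat.flesch_kincaid_grade(response_text),
        textstat.gunning_fog(response_text),
        textstat.smog_index(response_text),
        textstat.coleman_liau_index(response_text),
        textstat.automated_readability_index(response_text),
    ]
    return mean(scores)


if __name__ == "__main__":
    # Hypothetical patient-education sentence used only to demonstrate the call.
    sample = (
        "Blepharoplasty is a surgical procedure that removes excess skin "
        "and fat from the eyelids to improve vision or appearance."
    )
    print(f"Average grade level: {average_grade_level(sample):.1f}")
```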
