BJA Open (Jun 2024)

A comparative study of English and Japanese ChatGPT responses to anaesthesia-related medical questions

  • Kazuo Ando,
  • Masaki Sato,
  • Shin Wakatsuki,
  • Ryotaro Nagai,
  • Kumiko Chino,
  • Hinata Kai,
  • Tomomi Sasaki,
  • Rie Kato,
  • Teresa Phuongtram Nguyen,
  • Nan Guo,
  • Pervez Sultan

Journal volume & issue
Vol. 10
p. 100296

Abstract


Background: The expansion of artificial intelligence (AI) within large language models (LLMs) has the potential to streamline healthcare delivery. Despite the increased use of LLMs, disparities in their performance, particularly across different languages, remain underexplored. This study examines the quality of ChatGPT responses in English and Japanese to questions related to anaesthesiology.

Methods: Anaesthesiologists proficient in both languages were recruited as expert evaluators. Ten frequently asked questions in anaesthesia were selected and translated for evaluation. Three non-sequential responses from ChatGPT were assessed by the expert evaluators for content quality (accuracy, comprehensiveness, and safety) and communication quality (understanding, empathy/tone, and ethics).

Results: Eight anaesthesiologists evaluated the English and Japanese LLM responses. The overall quality for all questions combined was higher in English than in Japanese responses. Content and communication quality were significantly higher in English than in Japanese LLM responses (both P<0.001) across all three responses. Comprehensiveness, safety, and understanding scored higher in English LLM responses. For all three responses, more than half of the evaluators rated the English responses as better overall than the Japanese responses.

Conclusions: In this report, English LLM responses to anaesthesia-related frequently asked questions were superior in quality to Japanese responses when assessed by bilingual anaesthesia experts. This study highlights the potential for language-related disparities in healthcare information and the need to improve the quality of AI responses in underrepresented languages. Future studies are needed to explore these disparities in other commonly spoken languages and to compare the performance of different LLMs.

Keywords