Brain Sciences (May 2024)

ChatGPT for Tinnitus Information and Support: Response Accuracy and Retest after Three and Six Months

  • W. Wiktor Jedrzejczak,
  • Piotr H. Skarzynski,
  • Danuta Raj-Koziak,
  • Milaine Dominici Sanfins,
  • Stavros Hatzopoulos,
  • Krzysztof Kochanek

DOI
https://doi.org/10.3390/brainsci14050465
Journal volume & issue
Vol. 14, no. 5
p. 465

Abstract

Testing of ChatGPT has recently been performed over a diverse range of topics. However, most of these assessments have covered broad domains of knowledge. Here, we test ChatGPT’s knowledge of tinnitus, an important but specialized aspect of audiology and otolaryngology. Testing involved evaluating ChatGPT’s answers to a defined set of 10 questions on tinnitus. Furthermore, given that the technology is advancing quickly, we re-evaluated the responses to the same 10 questions 3 and 6 months later. The accuracy of the responses was rated by 6 experts (the authors) on a Likert scale ranging from 1 to 5. Most of ChatGPT’s responses were rated as satisfactory or better. However, we did detect a few instances where the responses were inaccurate and might be considered somewhat misleading. Over the first 3 months, the ratings generally improved, but there was no further significant improvement at 6 months. In our judgment, ChatGPT provided unexpectedly good responses, given that the questions were quite specific. Although no potentially harmful errors were identified, some mistakes could be seen as somewhat misleading. ChatGPT shows great potential if further developed by experts in specific areas, but for now it is not yet ready for serious application.

Keywords