Bioengineering (Jan 2024)

ChatGPT in Occupational Medicine: A Comparative Study with Human Experts

  • Martina Padovan,
  • Bianca Cosci,
  • Armando Petillo,
  • Gianluca Nerli,
  • Francesco Porciatti,
  • Sergio Scarinci,
  • Francesco Carlucci,
  • Letizia Dell’Amico,
  • Niccolò Meliani,
  • Gabriele Necciari,
  • Vincenzo Carmelo Lucisano,
  • Riccardo Marino,
  • Rudy Foddis,
  • Alessandro Palla

DOI: https://doi.org/10.3390/bioengineering11010057
Journal volume & issue: Vol. 11, No. 1, p. 57

Abstract

The objective of this study is to evaluate ChatGPT’s accuracy and reliability in answering complex medical questions related to occupational health and to explore the implications and limitations of AI in occupational medicine. The study also provides recommendations for future research in this area and informs decision-makers about AI’s impact on healthcare. A group of physicians was enlisted to create a dataset of questions and answers on Italian occupational medicine legislation. The physicians were divided into two teams, and each team member was assigned a different subject area. ChatGPT was used to generate an answer to each question, both with and without access to the relevant legislative context. The two teams then evaluated the human- and AI-generated answers in a blinded fashion, with each team reviewing the other team’s work. On a 5-point Likert scale, occupational physicians outperformed ChatGPT in generating accurate answers, although the answers ChatGPT provided with access to the legislative texts were comparable to those of the professional doctors. Still, we found that evaluators tended to prefer the human-generated answers, indicating that while ChatGPT is useful, users still value the opinions of occupational medicine professionals.

Keywords