BMC Medical Informatics and Decision Making (Jan 2025)
Enhancing doctor-patient communication using large language models for pathology report interpretation
Abstract
Background
Large language models (LLMs) are increasingly utilized in healthcare settings. Postoperative pathology reports, which are essential for diagnosing and determining treatment strategies for surgical patients, frequently include complex data that can be challenging for patients to comprehend. This complexity can adversely affect the quality of doctor-patient communication about diagnosis and treatment options, potentially impacting patient outcomes such as understanding of their condition, treatment adherence, and overall satisfaction.

Materials and methods
This study analyzed text pathology reports from four hospitals between October and December 2023, focusing on malignant tumors. Using GPT-4, we developed templates for interpretive pathology reports (IPRs) to simplify medical terminology for non-professionals. We randomly selected 70 reports to generate these templates and evaluated the remaining 628 reports for consistency and readability. Patient understanding was measured using a custom-designed pathology report understanding level assessment scale, scored by volunteers with no medical background. The study also recorded doctor-patient communication time and patient comprehension levels before and after using IPRs.

Results
Among the 698 pathology reports analyzed, interpretation through LLMs significantly improved readability and patient understanding. With the use of IPRs, the average communication time between doctors and patients decreased by over 70%, from 35 to 10 minutes (P < 0.001). Patients also scored higher on understanding when provided with AI-generated reports, improving from 5.23 to 7.98 points (P < 0.001), indicating an effective translation of complex medical information. Consistency between original pathology reports (OPRs) and IPRs was also evaluated, with results showing high levels of consistency across all assessed dimensions and an average score of 4.95 out of 5.

Conclusion
This research demonstrates the efficacy of LLMs such as GPT-4 in enhancing doctor-patient communication by translating pathology reports into more accessible language. While this study did not directly measure patient outcomes or satisfaction, it provides evidence that improved understanding and reduced communication time may positively influence patient engagement. These findings highlight the potential of AI to bridge gaps between medical professionals and the public in healthcare environments.
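The methods describe prompting GPT-4 with report-interpretation templates to produce IPRs. As a rough illustrative sketch only (not the authors' actual pipeline), the example below shows how such a template might be applied with the OpenAI Python SDK; the prompt wording, model name, temperature, and function name generate_ipr are assumptions introduced for illustration.

```python
# Illustrative sketch only: applying a plain-language interpretation
# template to a pathology report via the OpenAI Python SDK (v1.x).
# The template text, model name, and settings are assumptions,
# not the authors' actual prompts or parameters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical interpretation template in the spirit of the study's IPRs
IPR_TEMPLATE = (
    "You are helping a patient with no medical background.\n"
    "Rewrite the following postoperative pathology report in plain, "
    "non-technical language. Explain the diagnosis, what was removed, "
    "margin status, and any findings relevant to further treatment. "
    "Do not add information that is not in the report.\n\n"
    "Pathology report:\n{report_text}"
)

def generate_ipr(report_text: str, model: str = "gpt-4") -> str:
    """Return an interpretive pathology report (IPR) for one original report."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": IPR_TEMPLATE.format(report_text=report_text)}
        ],
        temperature=0.2,  # keep the rewrite close to the source report
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = (
        "Invasive ductal carcinoma, grade 2, 1.8 cm; margins negative; "
        "0/3 sentinel lymph nodes positive."
    )
    print(generate_ipr(sample))
```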
Keywords