Scientific Reports (Sep 2024)

Large language multimodal models for new-onset type 2 diabetes prediction using five-year cohort electronic health records

  • Jun-En Ding,
  • Phan Nguyen Minh Thao,
  • Wen-Chih Peng,
  • Jian-Zhe Wang,
  • Chun-Cheng Chug,
  • Min-Chen Hsieh,
  • Yun-Chien Tseng,
  • Ling Chen,
  • Dongsheng Luo,
  • Chenwei Wu,
  • Chi-Te Wang,
  • Chih-Ho Hsu,
  • Yi-Tui Chen,
  • Pei-Fu Chen,
  • Feng Liu,
  • Fang-Ming Hung

DOI
https://doi.org/10.1038/s41598-024-71020-2
Journal volume & issue
Vol. 14, no. 1
pp. 1–12

Abstract

Type 2 diabetes mellitus (T2DM) is a prevalent health challenge faced by countries worldwide. In this study, we propose a novel framework of large language multimodal models (LLMMs) that incorporates multimodal data from clinical notes and laboratory results for diabetes risk prediction. We collected five years of electronic health records (EHRs), spanning 2017 to 2021, from a Taiwan hospital database. This dataset included 1,420,596 clinical notes, 387,392 laboratory results, and more than 1505 laboratory test items. Our method combined a text embedding encoder and a multi-head attention layer to learn laboratory values, and utilized a deep neural network (DNN) module to merge blood features with chronic disease semantics into a latent space. In our experiments, we observed that integrating clinical notes with predictions based on textual laboratory values significantly enhanced the predictive capability of the unimodal model in the early detection of T2DM. Moreover, we achieved an area under the receiver operating characteristic curve (AUC) greater than 0.70 for new-onset T2DM prediction, demonstrating the effectiveness of leveraging textual laboratory data for training and inference in LLMMs and improving the accuracy of new-onset diabetes prediction.
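To make the fusion architecture described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of how a clinical-note embedding could be combined with multi-head attention over tokenized laboratory-value text and merged by a DNN head for binary new-onset T2DM prediction. All layer names, dimensions, and the choice of PyTorch are illustrative assumptions; the published model's exact encoders and hyperparameters are described in the full paper.

```python
# Minimal sketch (not the authors' code) of a multimodal fusion model:
# clinical-note embeddings + multi-head attention over textual lab values,
# merged by a DNN into a latent space for binary T2DM risk prediction.
import torch
import torch.nn as nn


class LabAttentionEncoder(nn.Module):
    """Embeds tokenized laboratory-value text and pools it with multi-head self-attention."""

    def __init__(self, vocab_size: int, dim: int = 128, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, lab_tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(lab_tokens)          # (batch, seq_len, dim)
        x, _ = self.attn(x, x, x)           # self-attention over laboratory tokens
        return x.mean(dim=1)                # (batch, dim) pooled lab feature


class MultimodalT2DMPredictor(nn.Module):
    """Fuses a clinical-note embedding with the lab feature via a DNN head (illustrative)."""

    def __init__(self, note_dim: int = 768, vocab_size: int = 5000, dim: int = 128):
        super().__init__()
        self.lab_encoder = LabAttentionEncoder(vocab_size, dim)
        self.note_proj = nn.Linear(note_dim, dim)   # project a precomputed note embedding
        self.dnn = nn.Sequential(                   # merge both modalities in a latent space
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1),                      # logit for new-onset T2DM
        )

    def forward(self, note_emb: torch.Tensor, lab_tokens: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.note_proj(note_emb), self.lab_encoder(lab_tokens)], dim=-1)
        return self.dnn(fused).squeeze(-1)


if __name__ == "__main__":
    model = MultimodalT2DMPredictor()
    notes = torch.randn(8, 768)                     # e.g., embeddings from a text encoder
    labs = torch.randint(1, 5000, (8, 32))          # tokenized laboratory-value text
    probs = torch.sigmoid(model(notes, labs))       # per-patient risk scores in (0, 1)
    print(probs.shape)                              # torch.Size([8])
```

In this sketch the note encoder is abstracted as a precomputed embedding and the two modality features are simply concatenated before the DNN; the paper's actual text encoder, attention configuration, and fusion strategy should be taken from the full article.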