Clinical Nutrition Open Science (Dec 2024)

Investigation of a chest radiograph-based deep learning model to identify an imaging biomarker for malnutrition in older adults

  • Ryo Sasaki,
  • Yasuhiko Nakao,
  • Fumihiro Mawatari,
  • Takahito Nishihara,
  • Masafumi Haraguchi,
  • Masanori Fukushima,
  • Ryu Sasaki,
  • Satoshi Miuma,
  • Hisamitsu Miyaaki,
  • Kazuhiko Nakao

Journal volume & issue
Vol. 58
pp. 240–251

Abstract


Summary: Background & Aims: In recent years, artificial intelligence (AI) models based on chest radiography have gained attention in various fields, including cardiac function, age estimation, and clinical assessment during hospitalization. The Global Leadership Initiative on Malnutrition (GLIM) criteria are widely used in the diagnosis of malnutrition. The objective of this study was to examine the ability of deep learning models applied to chest radiographs to predict hematological parameters known to be associated with malnutrition, as well as malnutrition scores, including the GLIM criteria.

Methods: A total of 3701 older patients (age ≥65 years) were admitted to our hospital from January 2021 to January 2022; after excluding those with missing data, 2862 participants were enrolled. Chest radiographs; basic information (height, weight, age, and sex); hematological parameters (albumin, hemoglobin, lymphocyte count, and C-reactive protein); malnutrition assessments, namely GLIM severity, Geriatric Nutritional Risk Index (GNRI), modified Controlling Nutritional Status (mCONUT), and Subjective Global Assessment (SGA); and Nutritional Support Team (NST) intervention were extracted and used as training and validation data. The chest radiograph image matrices served as explanatory variables, with the numerical (hematological parameters) or categorical (scoring) nutritional data as objective variables. A previously reported deep learning model was used to construct a chest radiography-based prediction model from the training data. Predictions were evaluated by computing correlation coefficients and the area under the curve (AUC).

Results: In the numerical-variable analysis, albumin and hemoglobin predictions were relatively accurate (R=0.71 and 0.74, respectively). In the categorical malnutrition prediction, the chest radiograph-based AI effectively supported nutritional decisions based on the GNRI (AUC: 0.88, 0.83, 0.88, and 0.90), SGA (0.75, 0.71, and 0.88), mCONUT (0.84, 0.88, 0.90, and 0.88), and GLIM severity (0.88, 0.82, and 0.85) class indices. Class activation map (CAM) analysis identified variation in the radiographic sites relevant to each malnutrition prediction, with some sites in agreement across scores and others in disagreement.

Conclusion: Deep learning-based chest radiographic AI has the potential to reflect malnutrition scoring more accurately than hematological parameters alone. Furthermore, it can predict outcomes and assess malnutrition, including by the GLIM criteria. It is anticipated that AI will be integrated into the NST workflow in the future.
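The two evaluation metrics reported in the abstract, correlation coefficients for numerical targets (e.g. albumin) and per-class AUC for categorical nutritional indices (e.g. GLIM severity), can be sketched with standard-library Python. The data below are toy values for illustration only; in the study, the predictions come from the chest radiograph deep learning model, and the class labels and variable names here are assumptions, not the authors' actual data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def auc_ovr(labels, scores, positive):
    """One-vs-rest AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive case is scored above a negative one,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == positive]
    neg = [s for l, s in zip(labels, scores) if l != positive]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy example: measured vs. model-predicted albumin (g/dL)
measured = [2.8, 3.1, 3.5, 3.9, 4.2, 4.5]
predicted = [2.9, 3.0, 3.6, 3.7, 4.3, 4.4]
print(f"albumin r = {pearson_r(measured, predicted):.2f}")

# Toy example: model scores for a hypothetical "severe" GLIM class
glim = ["severe", "severe", "moderate", "normal", "normal"]
severe_score = [0.9, 0.7, 0.6, 0.2, 0.1]
print(f"severe-class AUC = {auc_ovr(glim, severe_score, 'severe'):.2f}")
```

Reporting one AUC per class, as the abstract does for GNRI, SGA, mCONUT, and GLIM severity, corresponds to repeating the one-vs-rest computation with each class in turn treated as the positive label.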

Keywords