Discover Artificial Intelligence (Dec 2024)

Enhancing anemia detection through multimodal data fusion: a non-invasive approach using EHRs and conjunctiva images

  • Muhammad Ramzan,
  • Muhammad Usman Saeed,
  • Ghulam Ali

DOI
https://doi.org/10.1007/s44163-024-00196-3
Journal volume & issue
Vol. 4, no. 1
pp. 1 – 19

Abstract

Anemia detection using multimodal approaches leverages the integration of multiple data sources, such as imaging, clinical records, and hematological parameters, to improve diagnostic accuracy. Such methods capture the complex interplay of factors contributing to anemia, providing a more comprehensive assessment than traditional single-modality techniques. In this research, a novel deep learning multimodal feature-fusion approach is proposed for the automated detection of anemia using Electronic Health Records (EHRs) and a conjunctiva image dataset. First, the EHR records are preprocessed by selecting the most appropriate features using Random Forest. Features from the conjunctiva images are extracted using RCBAM (Reverse Convolution Block Attention Mechanism). The Grad-CAM algorithm is then applied to calculate the pixel percentages of all the features. The outputs of the Random Forest and Grad-CAM algorithms are concatenated to form a multimodal fusion. The important information from the concatenated features is selected with the help of a professional healthcare consultant. Experiments are performed on the textual and image datasets individually and after concatenation. The results show that the proposed model outperforms state-of-the-art methods with an accuracy of 95%. Despite challenges such as class imbalance and computational demands, our findings reveal substantial clinical potential, offering a patient-friendly and accessible diagnostic solution.
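The fusion described in the abstract (Random Forest feature selection on EHR data, an image branch summarized by Grad-CAM-derived statistics, then concatenation) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, feature counts, and the `top_k` cutoff are all placeholder assumptions, and the image branch is stubbed with random values standing in for RCBAM/Grad-CAM outputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 patients, 15 EHR features, binary anemia label.
X_ehr = rng.normal(size=(200, 15))
y = rng.integers(0, 2, size=200)

# Step 1: a Random Forest ranks EHR features; keep the top k (k is a
# placeholder choice here, not a value from the paper).
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_ehr, y)
top_k = 8
selected = np.argsort(rf.feature_importances_)[::-1][:top_k]
ehr_features = X_ehr[:, selected]

# Step 2 (stub): image-branch features, e.g. pixel-percentage statistics
# computed from Grad-CAM heatmaps over RCBAM feature maps. Here, a random
# 200 x 4 matrix stands in for those values.
img_features = rng.random(size=(200, 4))

# Step 3: early fusion by concatenating both branches along the feature axis;
# the fused matrix would then feed the downstream classifier.
fused = np.concatenate([ehr_features, img_features], axis=1)
print(fused.shape)  # (200, 12)
```

Concatenation is the simplest fusion strategy; the paper additionally applies expert-guided selection on the concatenated features, which is not reproduced here.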

Keywords