Sensors (Feb 2025)

A Multimodal Deep Learning Approach to Intraoperative Nociception Monitoring: Integrating Electroencephalogram, Photoplethysmography, and Electrocardiogram

  • Omar M. T. Abdel Deen,
  • Shou-Zen Fan,
  • Jiann-Shing Shieh

DOI: https://doi.org/10.3390/s25041150
Journal volume & issue: Vol. 25, no. 4, p. 1150

Abstract

Monitoring nociception under general anesthesia remains challenging due to the complexity of pain pathways and the limitations of single-parameter methods. In this study, we introduce a multimodal approach that integrates electroencephalogram (EEG), photoplethysmography (PPG), and electrocardiogram (ECG) signals to predict nociception. We collected data from patients undergoing general anesthesia at two hospitals and developed and compared two deep learning models: a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network. Both models were trained on expert anesthesiologists’ assessments of nociception. We evaluated normalization strategies for offline and online usage and found that Min–Max normalization was most effective for our dataset. Our results demonstrate that the MLP model accurately captured nociceptive changes in response to painful surgical stimuli, whereas the LSTM model provided smoother predictions but with lower sensitivity to rapid changes. These findings underscore the potential of multimodal, deep learning-based solutions to improve real-time nociception monitoring in diverse clinical settings.
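The listing does not include the authors' code, so the sketch below is only a rough illustration of the pipeline the abstract describes: offline Min–Max normalization, x' = (x − min) / (max − min), applied per feature, followed by minimal MLP and LSTM regressors trained against an expert-assigned nociception score. The feature count, layer sizes, and all names here are illustrative assumptions, not the published configuration.

```python
# Hypothetical sketch of the abstract's two modeling choices; not the
# authors' implementation. Feature count (8), hidden sizes, and names
# are assumptions for illustration only.
import torch
import torch.nn as nn

def min_max_normalize(x, eps=1e-8):
    """Offline Min-Max scaling: each feature column mapped to [0, 1]
    using the minima/maxima of the full recording."""
    lo = x.min(dim=0, keepdim=True).values
    hi = x.max(dim=0, keepdim=True).values
    return (x - lo) / (hi - lo + eps)

class NociceptionMLP(nn.Module):
    """Maps one multimodal feature vector per time window to a score;
    reacts quickly because each window is scored independently."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, n_features)
        return self.net(x).squeeze(-1)

class NociceptionLSTM(nn.Module):
    """Consumes a sequence of windows, so predictions are smoothed
    over time at the cost of sensitivity to rapid changes."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

# Example: 8 features per window (e.g., EEG band powers, PPG amplitude, HRV)
feats = min_max_normalize(torch.rand(32, 8))
print(NociceptionMLP(8)(feats).shape)                # torch.Size([32])
print(NociceptionLSTM(8)(feats.unsqueeze(0)).shape)  # torch.Size([1])
```

For online use, the fixed per-recording minima and maxima above would presumably be replaced with running estimates updated as data arrive, which is the offline/online distinction the abstract says was evaluated.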

Keywords