IEEE Access (Jan 2024)

A Multimodal Driver Anger Recognition Method Based on Context-Awareness

  • Tongqiang Ding,
  • Kexin Zhang,
  • Shuai Gao,
  • Xinning Miao,
  • Jianfeng Xi

DOI
https://doi.org/10.1109/ACCESS.2024.3422383
Journal volume & issue
Vol. 12
pp. 118533 – 118550

Abstract

Driving anger poses an increasingly serious threat to traffic safety. With the development of human-computer interaction and intelligent transportation systems, the application of biometric technology to driver emotion recognition has attracted widespread attention. This study proposes a context-aware multimodal driver anger emotion recognition method (CA-MDER) to address the main issues encountered in multimodal emotion recognition tasks: individual differences among drivers, variability in emotional expression across driving scenarios, and the failure to capture driving behavior information that reflects vehicle-to-vehicle interaction. The method employs Attention Mechanism-Depthwise Separable Convolutional Neural Networks (AM-DSCNN), an improved Support Vector Machine (SVM), and Random Forest (RF) models to recognize anger from facial, vocal, and driving-state information, and uses Context-Aware Reinforcement Learning (CA-RL) to adaptively assign weights for multimodal decision-level fusion. The proposed method performs well on emotion classification metrics, achieving an accuracy of 91.68% and an F1 score of 90.37%, demonstrating robust multimodal emotion recognition performance.
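To make the decision-level fusion step described above concrete, the sketch below shows weighted fusion of per-modality class probabilities under context-dependent weights. It is a minimal illustration, not the paper's CA-RL implementation: the function names, the softmax normalization of the weights, and the example numbers are assumptions introduced here for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax used to normalize raw modality scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_decisions(face_probs, voice_probs, driving_probs, context_scores):
    """Weighted decision-level fusion of per-modality class probabilities.

    face_probs, voice_probs, driving_probs: arrays of shape (n_classes,)
    context_scores: one raw score per modality, standing in for the output
    of a context-aware weighting policy; here they are simply softmax-normalized.
    """
    w = softmax(np.asarray(context_scores, dtype=float))               # (3,)
    stacked = np.vstack([face_probs, voice_probs, driving_probs])      # (3, n_classes)
    fused = w @ stacked                                                # (n_classes,)
    return fused / fused.sum()

if __name__ == "__main__":
    # Hypothetical per-modality outputs for classes [neutral, anger]
    face = np.array([0.30, 0.70])     # e.g. from a facial-expression CNN
    voice = np.array([0.55, 0.45])    # e.g. from a speech-emotion SVM
    driving = np.array([0.20, 0.80])  # e.g. from a driving-behavior RF
    # Context scores that happen to favor the driving-state modality
    fused = fuse_decisions(face, voice, driving, context_scores=[0.5, 0.2, 1.3])
    print("Fused class probabilities:", fused)
```

In such a scheme, the context-aware component only has to produce the per-modality scores; the fusion itself reduces to a weighted average of the classifiers' probability outputs.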

Keywords