Frontiers in Artificial Intelligence (Nov 2024)

Multimodal driver emotion recognition using motor activity and facial expressions

  • Carlos H. Espino-Salinas,
  • Huizilopoztli Luna-García,
  • José M. Celaya-Padilla,
  • Cristian Barría-Huidobro,
  • Nadia Karina Gamboa Rosales,
  • David Rondon,
  • Klinge Orlando Villalba-Condori

DOI: https://doi.org/10.3389/frai.2024.1467051
Journal volume & issue: Vol. 7

Abstract

Driving performance can be significantly impaired when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy can increase the risk of traffic accidents. This study introduces a methodology for recognizing four specific emotions using an intelligent model that processes and analyzes signals from motor activity and driver behavior, generated by interactions with basic driving elements, together with facial geometry images captured during emotion induction. The research applies machine learning to identify the motor activity signals most relevant to emotion recognition. In addition, a pre-trained Convolutional Neural Network (CNN) model is employed to extract probability vectors from images corresponding to the four emotions under investigation. These data sources are integrated through a unidimensional network for emotion classification. The main contribution of this research is a multimodal intelligent model that combines motor activity signals and facial geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.
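The fusion step described above — combining motor-activity features with the CNN's four-way probability vector in a small downstream network — can be sketched in plain NumPy. This is an illustrative toy, not the authors' implementation: the feature dimensions, weight shapes, and random inputs below are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "sadness", "agitation", "joy"]

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_and_classify(motor_features, cnn_probs, W1, b1, W2, b2):
    """Concatenate motor-activity features with the facial-geometry CNN's
    4-way probability vector, then classify with a small dense network
    (a stand-in for the paper's unidimensional fusion network)."""
    x = np.concatenate([motor_features, cnn_probs], axis=-1)
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return softmax(h @ W2 + b2)        # distribution over the 4 emotions

# Toy dimensions (assumptions): 16 motor-signal features + 4 CNN probabilities.
n_motor, n_classes, n_hidden = 16, 4, 8
W1 = rng.normal(scale=0.1, size=(n_motor + n_classes, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

motor = rng.normal(size=n_motor)             # e.g. steering/pedal statistics
probs = softmax(rng.normal(size=n_classes))  # hypothetical CNN output
pred = fuse_and_classify(motor, probs, W1, b1, W2, b2)
print(EMOTIONS[int(pred.argmax())], pred)
```

In practice the weights would be learned end to end on the induced-emotion dataset; the sketch only shows how the two modalities meet at the concatenation step before classification.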

Keywords