Journal of Intelligent and Connected Vehicles (Jun 2025)
Enhancing driver emotion recognition through deep ensemble classification
Abstract
This research addresses the challenging task of classifying drivers' emotions to increase their awareness of their own driving behavior. Emotional states at the wheel often lead drivers to overlook poor driving practices; by automatically detecting and identifying these behaviors, drivers can proactively obtain valuable insights that reduce potential accidents. This study proposes a comprehensive facial expression recognition model for drivers that uses a unified architecture comprising a convolutional neural network (CNN), a recurrent neural network (RNN), and a multilayer perceptron (MLP) classifier. First, a faster region-based convolutional neural network (Faster R-CNN) is employed for accurate and efficient detection of drivers' faces in live and recorded videos. Features are then extracted from three CNN models and merged via advanced fusion techniques to create an ensemble classification model. Moreover, the Faster R-CNN feature learning module is replaced with a new convolutional backbone, VGG16, which maximizes the precision and effectiveness of facial detection in our system. Evaluations of the proposed facial detection and facial expression recognition (DFER) system yield accuracies of 89.2%, 97.20%, 99.01%, 93.65%, and 98.61% on the EMOTIC, CK+, FERPLUS, AffectNet, and custom datasets, respectively. The custom datasets were meticulously acquired in a simulated driving environment created for this study. This research highlights the potential of deep ensemble classification in improving driver emotion recognition, thereby contributing to enhanced road safety.
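The abstract's central mechanism, fusing features from several CNN backbones into a single vector that an MLP head classifies, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the number of emotion classes, and the placeholder "backbones" (here, fixed random projections standing in for trained networks) are all assumptions for demonstration; the real system would use trained VGG16/CNN feature extractors and a learned MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes for three CNN backbones (assumed values).
FEAT_DIMS = [512, 1024, 2048]
NUM_EMOTIONS = 7  # assumed number of emotion classes

def extract_features(face_crop, dim):
    """Stand-in for one pretrained CNN backbone.

    A deterministic random projection replaces learned convolutional
    features; in the real system this would be a trained network.
    """
    flat = face_crop.ravel()
    proj = np.random.default_rng(dim).standard_normal((dim, flat.size))
    return proj @ flat / np.sqrt(flat.size)

def fuse_and_classify(face_crop, W, b):
    """Concatenate all backbones' features, then apply an MLP head."""
    fused = np.concatenate(
        [extract_features(face_crop, d) for d in FEAT_DIMS]
    )
    logits = W @ fused + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over emotion classes

# Untrained MLP head, for shape demonstration only.
W = rng.standard_normal((NUM_EMOTIONS, sum(FEAT_DIMS))) * 0.01
b = np.zeros(NUM_EMOTIONS)

face = rng.standard_normal((64, 64))  # a detected face crop
probs = fuse_and_classify(face, W, b)
print(probs.shape)  # one probability per emotion class
```

The design point the sketch captures is early (feature-level) fusion: each backbone contributes a complementary view of the face, and the classifier sees their concatenation rather than averaging the backbones' separate predictions.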
Keywords