Measurement: Sensors (Apr 2023)

On-road driver facial expression emotion recognition with parallel multi-verse optimizer (PMVO) and optical flow reconstruction for partial occlusion in internet of things (IoT)

  • S.S. Sudha,
  • S.S. Suganya

Journal volume & issue
Vol. 26
p. 100711

Abstract


In recent years, many large-scale information systems in the Internet of Things (IoT) have been converted into interdependent sensor networks, such as smart cities, smart medical systems, and industrial Internet systems. The successful application of Facial Expression Recognition (FER) in the IoT will make our algorithms faster and more convenient, lower overall costs, support better business practices, and enhance sustainability. FER is essential for effective communication between humans and machines. Video facial expression detection is a crucial component for gauging the driver's mood in a driver assistance system. According to several studies, the emotions of the driver play a significant role in dictating the driver's behavior, which can lead to disastrous car crashes. However, the factors that hinder reliable identification of driver emotions under realistic monitoring conditions include changes in pose, lighting, and occlusions. To address these issues, the Driver Facial Expression Emotion Recognition (DFEER) system was developed, based on the restoration of the blurred facial region. One of the first steps when dealing with a sequence of blurred faces is to calculate the optical flows between the frames. After that, an optical flow reconstruction using a trained Parallel Multi-Verse Optimizer (PMVO) is performed to repair the occlusion-induced damage. The major solutions are randomly split into groups of occluded and non-occluded optical flows using a parallel technique, and the groups exchange their findings after a set number of iterations. Once the optical flows have been reconstructed, they are used directly in the classification phase of expression prediction. In this study, a method based on Very Deep Convolutional Networks (VGGNet) is proposed for recognising human emotions.
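The parallel grouping scheme described above — independent groups of candidate solutions that periodically exchange their best findings — can be sketched as a minimal, generic Parallel Multi-Verse Optimizer. Everything below (function names, parameter values, the sphere test objective, the wormhole/elitism details) is an illustrative assumption based on the standard MVO algorithm, not the paper's implementation:

```python
import random

def mvo_group(population, fitness, iters, lb, ub):
    # One group's Multi-Verse Optimizer run (simplified sketch):
    # universes drift toward the group's best via "wormhole" jumps.
    dim = len(population[0])
    for t in range(1, iters + 1):
        pop = sorted(population, key=fitness)
        best = pop[0]
        wep = 0.2 + t * (1.0 - 0.2) / iters        # wormhole existence probability
        tdr = 1.0 - (t / iters) ** (1.0 / 6.0)     # travelling distance rate (shrinks)
        new_pop = [best[:]]                        # elitism: keep the best universe
        for uni in pop[1:]:
            child = uni[:]
            for d in range(dim):
                if random.random() < wep:
                    step = tdr * ((ub - lb) * random.random() + lb)
                    child[d] = best[d] + step if random.random() < 0.5 else best[d] - step
                    child[d] = min(max(child[d], lb), ub)  # clamp to bounds
            new_pop.append(child)
        population = new_pop
    return population

def pmvo(fitness, dim=4, n_groups=2, group_size=10,
         rounds=3, iters_per_round=20, lb=-5.0, ub=5.0):
    # Parallel variant: groups evolve independently, then communicate
    # by sharing the global best after each round of iterations.
    groups = [[[random.uniform(lb, ub) for _ in range(dim)]
               for _ in range(group_size)] for _ in range(n_groups)]
    for _ in range(rounds):
        groups = [mvo_group(g, fitness, iters_per_round, lb, ub) for g in groups]
        global_best = min((min(g, key=fitness) for g in groups), key=fitness)
        for g in groups:
            g[0] = global_best[:]                  # communication step
    return min((min(g, key=fitness) for g in groups), key=fitness)

random.seed(1)
best = pmvo(lambda x: sum(v * v for v in x))       # toy sphere objective
print(sum(v * v for v in best))
```

In the paper this search operates on occluded/non-occluded optical-flow groups rather than a toy objective, but the structure — isolated groups, periodic exchange of the best solution — is the same.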
Both the CK+ (Cohn-Kanade) and the KMU-FED (Keimyung University Facial Expression of Drivers) databases are used to assess the classification model's efficacy. The effectiveness of the suggested strategy is then evaluated on these findings using accuracy, recall, precision, and f-measure.
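The four evaluation metrics named above have standard definitions; as a self-contained sketch (not the paper's evaluation code, and with made-up example labels), they can be computed per class as follows:

```python
def precision_recall_f1(y_true, y_pred, positive):
    # One-vs-rest counts for the chosen positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical expression labels for illustration only.
y_true = ["happy", "angry", "happy", "sad", "happy"]
y_pred = ["happy", "happy", "happy", "sad", "angry"]
p, r, f = precision_recall_f1(y_true, y_pred, "happy")
print(p, r, f, accuracy(y_true, y_pred))  # each of p, r, f is 2/3; accuracy is 0.6
```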

Keywords