International Journal of Computational Intelligence Systems (Feb 2024)

Multi-class Facial Emotion Expression Identification Using DL-Based Feature Extraction with Classification Models

  • M. Anand,
  • S. Babu

DOI
https://doi.org/10.1007/s44196-024-00406-x
Journal volume & issue
Vol. 17, no. 1
pp. 1 – 17

Abstract

Facial expression detection from images and videos has recently gained attention due to the wide variety of applications it has found in the field of computer vision, such as advanced driver assistance systems (ADAS), augmented and virtual reality (AR/VR), video retrieval, and security systems. Facial expressions, body language, hand gestures, and eye contact have all been researched as means of deciphering and understanding human emotions. Automated facial expression recognition (FER) is a significant visual recognition task because human emotions are a universal signal used in non-verbal communication. The six primary universal expressions of emotion are characterized as happiness, sadness, anger, contempt, fear, and surprise. While the accuracy of deep learning (DL)-based approaches has improved significantly across many domains, automated FER remains a difficult undertaking, especially in real-world applications. In this research work, two publicly available datasets, FER2013 and EMOTIC, are considered for the validation process. Initially, pre-processing is performed, comprising histogram equalization, image normalization, and face detection using the Multi-task Cascaded Convolutional Network (MT-CNN). Then, the DL-based EfficientNetB0 is used to extract features from the pre-processed images for further processing. Finally, the Weighted Kernel Extreme Learning Machine (WKELM) is used for classification of emotions, where the kernel parameters are optimized by the Red Fox Optimizer (RFO). From the experimental analysis, the proposed model achieved 95.82% accuracy, 95.81% F1-score, and 95% recall on the testing data.
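The pre-processing stage described in the abstract (histogram equalization followed by image normalization of a detected face crop) can be sketched in pure NumPy. This is a minimal illustration only: the MT-CNN face-detection step is omitted, and the 48x48 crop size is an assumption motivated by the FER2013 image format, not a detail taken from the paper.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value of the first non-empty bin
    # Stretch the CDF to the full 0..255 range; bins below the darkest
    # present value are clipped (they are never indexed anyway).
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale pixel intensities to [0, 1] before feature extraction."""
    return img.astype(np.float32) / 255.0

# Illustration on a synthetic low-contrast 48x48 "face crop"
# (a real pipeline would first crop the face region via MT-CNN).
rng = np.random.default_rng(0)
crop = rng.integers(100, 156, size=(48, 48), dtype=np.uint8)  # narrow band
eq = equalize_histogram(crop)   # contrast stretched to the full 0..255 range
x = normalize(eq)               # float input ready for EfficientNetB0
```

After equalization the low-contrast crop spans the full intensity range, and normalization maps it into [0, 1] as typically expected by a CNN feature extractor.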

Keywords