IEEE Access (Jan 2024)
EM-UDA: Emotion Detection Using Unsupervised Domain Adaptation for Classification of Facial Images
Abstract
Facial expressions convey human feelings and can be used to assess a person's mood. Accurate mood prediction can be of immense help in several areas, including monitoring an individual's mental health. Most methods proposed for facial emotion recognition rely on supervised learning, and research on using facial expressions to assess mental health is slowed by the scarcity of annotated data. The few unsupervised studies in this area are multimodal, combining text, images, and at times questionnaires; the questionnaires and text are often used to validate the results obtained from images. The proposed work designs an unsupervised adversarial domain adaptation model, EM-UDA, that automates the recognition of emotions from facial images and thereby addresses the scarcity of labeled data. The model classifies facial images as depicting negative or positive emotions. EM-UDA achieves an accuracy of 83.9% and an F1-score of 82.8% when trained on AffectNet and CK+ and tested on CK+, and an accuracy of 74.55% and an F1-score of 74.87% when trained on AffectNet and FER 2013 and tested on FER 2013.
Keywords