Paladyn (Jun 2024)

A robot electronic device for multimodal emotional recognition of expressions

  • Nie Lulu

DOI
https://doi.org/10.1515/pjbr-2022-0127
Journal volume & issue
Vol. 15, no. 1
pp. 94557 – 94572

Abstract

This study addresses the low recognition rates of emotion recognition systems that rely on sound data, which is vulnerable to ambient noise. To overcome this limitation, we propose an approach that leverages emotional information from multiple modalities, integrating speech and facial expressions through feature-layer and decision-layer fusion strategies. Unlike traditional fusion algorithms, the proposed multimodal emotion recognition algorithm performs fusion at both the feature layer and the decision layer. This dual fusion preserves the distinctive characteristics of the emotional information in each modality while maintaining inter-modal correlations. Experiments on the eNTERFACE’05 multimodal emotion database demonstrate a recognition accuracy of 89.3%, surpassing the 83.92% achieved by the current state-of-the-art kernel-space feature fusion method, an improvement of 5.38 percentage points. By combining emotional data from speech and facial expressions through data fusion, this study contributes to the progress of multimodal emotion recognition systems.
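The dual fusion described above can be illustrated with a minimal sketch. Note that all feature dimensions, classifier weights, and the equal-weight averaging below are illustrative assumptions, not the paper's actual method; the stand-in linear classifiers would be trained models in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors (dimensions are assumptions).
speech_feat = rng.normal(size=16)  # e.g. prosodic/spectral speech features
face_feat = rng.normal(size=16)    # e.g. facial-expression features

def softmax(z):
    """Convert raw classifier scores into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

n_emotions = 6  # eNTERFACE'05 covers six basic emotions

# Stand-in linear classifiers; real ones would be learned from data.
W_speech = rng.normal(size=(n_emotions, 16))
W_face = rng.normal(size=(n_emotions, 16))
W_fused = rng.normal(size=(n_emotions, 32))

# Feature-layer fusion: concatenate modality features, classify jointly.
fused_feat = np.concatenate([speech_feat, face_feat])
p_feature = softmax(W_fused @ fused_feat)

# Decision-layer fusion: classify each modality separately,
# then average the per-modality posteriors.
p_decision = 0.5 * (softmax(W_speech @ speech_feat) + softmax(W_face @ face_feat))

# Dual fusion: combine both fusion outputs (equal weighting is an assumption).
p_final = 0.5 * (p_feature + p_decision)
predicted = int(np.argmax(p_final))
```

Feature-layer fusion keeps inter-modal correlations (the joint classifier sees both modalities at once), while decision-layer fusion preserves each modality's distinctive characteristics (each classifier specializes in one modality); averaging the two outputs retains both properties.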

Keywords