Entropy (Jan 2023)

Cross-Corpus Speech Emotion Recognition Based on Multi-Task Learning and Subdomain Adaptation

  • Hongliang Fu,
  • Zhihao Zhuang,
  • Yang Wang,
  • Chen Huang,
  • Wenzhuo Duan

DOI
https://doi.org/10.3390/e25010124
Journal volume & issue
Vol. 25, no. 1
p. 124

Abstract


To address the feature-distribution discrepancy that arises in cross-corpus speech emotion recognition, this paper proposes an emotion recognition model based on multi-task learning and subdomain adaptation. Existing methods fall short in both speech feature representation and cross-corpus feature-distribution alignment. The proposed model uses a deep denoising autoencoder as a shared feature-extraction network for multi-task learning, with a fully connected layer and a softmax layer added as task-specific layers for each recognition task. A subdomain adaptation algorithm for emotion and gender features is then applied to the shared network to obtain the shared emotion and gender features of the source and target domains, respectively. Multi-task learning strengthens the representational ability of the features, while the subdomain adaptation algorithm improves their transferability and alleviates the impact of distribution differences in the emotional features. Averaged over six cross-corpus speech emotion recognition experiments, the weighted average recall of the proposed model exceeds that of competing models by 1.89~10.07%, verifying its validity.
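Subdomain adaptation of the kind the abstract describes typically aligns source and target feature distributions within each class, rather than globally. As an illustration only (not the paper's exact formulation), the following NumPy sketch computes a class-conditional maximum mean discrepancy: an RBF-kernel MMD estimated separately for each shared class and then averaged. The function names and the choice of kernel are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, turned into an RBF kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def subdomain_mmd(Xs, ys, Xt, yt, gamma=1.0):
    """Average squared MMD between source and target features,
    computed per class (subdomain) over the classes both domains share."""
    losses = []
    for c in np.intersect1d(ys, yt):
        S, T = Xs[ys == c], Xt[yt == c]
        # Biased MMD^2 estimate for this class's subdomain.
        mmd2 = (rbf_kernel(S, S, gamma).mean()
                - 2 * rbf_kernel(S, T, gamma).mean()
                + rbf_kernel(T, T, gamma).mean())
        losses.append(mmd2)
    return float(np.mean(losses))
```

In training, a loss like this would be added to the task losses of the shared network, penalizing per-class distribution mismatch between corpora; when the two domains' class-conditional features coincide, the loss approaches zero.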

Keywords