Multimodal Technologies and Interaction (Apr 2022)

Emotion Classification from Speech and Text in Videos Using a Multimodal Approach

  • Maria Chiara Caschera,
  • Patrizia Grifoni,
  • Fernando Ferri

DOI
https://doi.org/10.3390/mti6040028
Journal volume & issue
Vol. 6, No. 4, p. 28

Abstract

Emotion classification is a research area with an extensive literature spanning natural language processing, multimedia data, semantic knowledge discovery, social network mining, and text and multimedia data mining. This paper addresses the problem of emotion classification and proposes a method for classifying the emotions expressed in multimodal data extracted from videos. The proposed method models multimodal data as a sequence of features extracted from facial expressions, speech, gestures, and text, using a linguistic approach. Each sequence of multimodal features is associated with an emotion by a method that models each emotion with a hidden Markov model. The trained models are evaluated on samples of multimodal sentences associated with seven basic emotions. The experimental results demonstrate a good classification rate for the emotions.
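The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' implementation: one hidden Markov model is trained per emotion on sequences of fused multimodal feature vectors, and an unseen sequence is assigned to the emotion whose model gives the highest log-likelihood. The emotion labels, the number of hidden states, the feature dimensionality, and the use of Gaussian emissions via hmmlearn are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): one Gaussian HMM per emotion,
# trained on sequences of multimodal feature vectors, classification by
# maximum log-likelihood. State count and feature size are assumptions.
import numpy as np
from hmmlearn import hmm

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
N_STATES = 4      # assumed number of hidden states per emotion model
N_FEATURES = 12   # assumed size of the fused face/speech/gesture/text feature vector


def train_emotion_models(sequences_by_emotion):
    """Fit one HMM per emotion.

    sequences_by_emotion: dict mapping emotion label -> list of 2-D arrays,
    each of shape (sequence_length, N_FEATURES).
    """
    models = {}
    for emotion, sequences in sequences_by_emotion.items():
        X = np.concatenate(sequences)               # stack all training sequences
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths for hmmlearn
        model = hmm.GaussianHMM(n_components=N_STATES,
                                covariance_type="diag",
                                n_iter=100)
        model.fit(X, lengths)
        models[emotion] = model
    return models


def classify(models, sequence):
    """Return the emotion whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda e: models[e].score(sequence))


# Toy usage with random data standing in for real multimodal features.
rng = np.random.default_rng(0)
train_data = {e: [rng.normal(size=(20, N_FEATURES)) for _ in range(5)]
              for e in EMOTIONS}
models = train_emotion_models(train_data)
print(classify(models, rng.normal(size=(15, N_FEATURES))))
```

In a per-class HMM setup like this, adding a new emotion only requires training one additional model, which is one common reason to prefer it over a single monolithic classifier for sequence data.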

Keywords