Intelligent Computing (Jan 2024)

A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models

  • Nicolas Facchinetti,
  • Federico Simonetta,
  • Stavros Ntalampiras

DOI
https://doi.org/10.34133/icomputing.0088
Journal volume & issue
Vol. 3

Abstract


Speech emotion recognition (SER) has been steadily gaining attention in recent years due to its potential applications in diverse fields and the possibilities offered by deep learning technologies. However, recent studies have shown that deep learning models can be vulnerable to adversarial attacks. In this paper, we systematically assess this problem by examining the impact of various white-box and black-box adversarial attacks on different languages and genders within the context of SER. We first propose a methodology comprising audio data processing, feature extraction, and a convolutional neural network long short-term memory (CNN-LSTM) architecture. The results highlight the considerable vulnerability of CNN-LSTM models to adversarial examples (AEs): all of the attacks considered substantially reduce the performance of the constructed models. Furthermore, when assessing the efficacy of the attacks, only minor differences were noted between the languages analyzed, as well as between male and female speech. In summary, this work contributes to the understanding of the robustness of CNN-LSTM models, particularly in SER scenarios, and of the impact of AEs. Our findings serve as a baseline for (a) developing more robust algorithms for SER, (b) designing more effective attacks, (c) investigating possible defenses, (d) improving our understanding of the vocal differences between languages and genders, and (e) enhancing our overall comprehension of the SER task.
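The pipeline the abstract describes — a CNN-LSTM classifier over audio features, attacked with a white-box method — can be sketched as follows. This is an illustrative toy in PyTorch, not the authors' implementation: the layer sizes, the four-class output, and the choice of single-step FGSM as the white-box attack are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Toy CNN-LSTM over spectrogram-like input of shape (batch, 1, time, mel)."""
    def __init__(self, n_mels=40, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # pool along the mel axis only, keep time resolution
        )
        self.lstm = nn.LSTM(input_size=8 * (n_mels // 2), hidden_size=32,
                            batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.conv(x)                                  # (batch, 8, time, mel//2)
        b, c, t, m = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * m)    # (batch, time, features)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])                        # logits from last time step

def fgsm(model, x, y, eps=0.05):
    """Single-step FGSM: nudge the input in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

torch.manual_seed(0)
model = CNNLSTM()
x = torch.randn(2, 1, 50, 40)   # two dummy "spectrograms": 50 frames, 40 mel bins
y = torch.tensor([0, 1])        # dummy emotion labels
x_adv = fgsm(model, x, y)
```

Each feature of the adversarial example differs from the original by at most `eps`, which is how such attacks stay nearly imperceptible while degrading model accuracy.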