Applied Sciences (May 2024)

A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models

  • Monserrat Vázquez-Hernández,
  • Luis Alberto Morales-Rosales,
  • Ignacio Algredo-Badillo,
  • Sofía Isabel Fernández-Gregorio,
  • Héctor Rodríguez-Rangel,
  • María-Luisa Córdoba-Tlaxcalteco

DOI: https://doi.org/10.3390/app14114614
Journal volume & issue: Vol. 14, no. 11, p. 4614

Abstract

In recent years, the use of deep learning models to deploy sentiment analysis systems has become widespread owing to their processing capacity and superior results on large volumes of information. However, years of research have demonstrated that deep learning models are vulnerable to strategically modified inputs called adversarial examples. Adversarial examples are generated by applying perturbations to the input data that are imperceptible to humans yet fool a deep learning model's understanding of the input, leading it to produce false predictions. In this work, we collect, select, summarize, discuss, and comprehensively analyze research on the generation of textual adversarial examples. A number of reviews of attacks on deep learning models for text applications already exist in the literature; in contrast to these works, we focus on research oriented to sentiment analysis tasks. We also cover the background needed to understand adversarial example generation, making this work self-contained. Finally, we draw on the reviewed literature to discuss adversarial example design in the context of sentiment analysis tasks.
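For context, the abstract's notion of an adversarial example can be stated as the standard constrained search used throughout the adversarial machine learning literature; this formulation and its symbols are generic background, not notation taken from the survey itself. Given a trained classifier $f$ and a correctly classified input $x$:

\[
\text{find } x' \quad \text{such that} \quad f(x') \neq f(x) \quad \text{and} \quad \mathrm{sim}(x', x) \geq \epsilon,
\]

where $\mathrm{sim}$ is a similarity measure and $\epsilon$ a threshold enforcing that the modification remains imperceptible. For images, the constraint is usually an $\ell_p$-norm bound on $x' - x$; for text, the perturbation space is discrete (character, word, or sentence edits), so a semantic-similarity constraint takes its place.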

Keywords