IEEE Access (Jan 2024)

How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses

  • Joana C. Costa,
  • Tiago Roxo,
  • Hugo Proença,
  • Pedro Ricardo Morais Inácio

DOI
https://doi.org/10.1109/ACCESS.2024.3395118
Journal volume & issue
Vol. 12
pp. 61113–61136

Abstract

Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations that alter network predictions, named adversarial examples, which raise concerns about the use of DNNs in critical areas, such as Self-driving Vehicles, Malware Detection, and Healthcare. This paper compiles the most recent adversarial attacks in Object Recognition, grouped by attacker capacity and knowledge, and modern defenses, clustered by protection strategy, providing the background needed to understand adversarial attacks and defenses. It also presents recent advances regarding Vision Transformers, which previous surveys have not covered, showing the similarities and differences between this architecture and Convolutional Neural Networks. Furthermore, it summarizes the datasets and metrics most used in adversarial settings and identifies datasets that require further evaluation, another contribution of this work. The survey compares state-of-the-art results under different attacks for multiple architectures and compiles all adversarial attacks and defenses with available code, a significant contribution to the literature. Finally, practical applications are discussed and open issues are identified, serving as a reference for future work.
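To make the notion of an adversarial example concrete, the sketch below applies the classic Fast Gradient Sign Method (FGSM, Goodfellow et al.), one of the white-box attacks covered by surveys of this kind: it perturbs an input in the direction of the loss gradient's sign, bounded by epsilon in the L-infinity norm, so the image looks unchanged to a human while the prediction can flip. The TinyNet model, the epsilon value, and the random input batch are placeholder assumptions for illustration, not taken from the paper.

    import torch
    import torch.nn as nn

    # Placeholder classifier; any image model with a standard forward() works here.
    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
            )

        def forward(self, x):
            return self.net(x)

    def fgsm_attack(model, x, y, epsilon=0.03):
        """FGSM: step in the sign of the input gradient to increase the loss,
        keeping the perturbation within epsilon in the L-infinity norm."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # stay in valid pixel range

    model = TinyNet().eval()
    x = torch.rand(1, 3, 32, 32)   # placeholder image batch in [0, 1]
    y = torch.tensor([0])          # placeholder label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max().item())  # perturbation size, at most epsilon

Epsilon controls the attack's visibility: the smaller the bound, the harder the perturbation is to detect, which is why many of the attacks and defenses the survey compares are evaluated at fixed L-infinity budgets.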

Keywords