EURASIP Journal on Information Security (Sep 2021)

Multitask adversarial attack with dispersion amplification

  • Pavlo Haleta,
  • Dmytro Likhomanov,
  • Oleksandra Sokol

DOI
https://doi.org/10.1186/s13635-021-00124-3
Journal volume & issue
Vol. 2021, no. 1
pp. 1 – 10

Abstract


Recently, adversarial attacks have drawn the community's attention as an effective tool for degrading the accuracy of neural networks. However, their real-world usage remains limited. The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an individual task. To attack such a system, an adversarial example has to fool many distinct networks at once, which is the major challenge addressed by this paper. We investigate multitask adversarial attacks as a threat to real-world machine learning solutions. We provide a novel black-box adversarial attack, which significantly outperforms current state-of-the-art methods such as the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM, also known as Iterative-FGSM) in the multitask setting.
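To illustrate the baseline the abstract compares against, here is a minimal sketch of the FGSM idea on a hypothetical toy model: the input is perturbed by a small step along the sign of the loss gradient. The linear model, weights, and loss below are illustrative assumptions, not the paper's setup; in practice the gradient comes from a trained network.

```python
import numpy as np

def loss(x, w, y):
    # squared-error loss of a toy linear model; stands in for a network loss
    return (x @ w - y) ** 2

def fgsm(x, w, y, eps):
    # closed-form gradient of the loss w.r.t. the input x
    grad = 2 * (x @ w - y) * w
    # FGSM step: move each input component eps along the gradient's sign
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.1])   # "clean" input (illustrative values)
w = np.array([1.0, -1.0, 2.0])   # fixed "model" weights (illustrative)
y = 0.0                          # target the model should output
x_adv = fgsm(x, w, y, eps=0.05)
# the perturbed input increases the loss relative to the clean input
print(loss(x_adv, w, y) > loss(x, w, y))  # True
```

BIM (Iterative-FGSM) repeats this step several times with a smaller eps, clipping the result back into an allowed perturbation range after each iteration.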

Keywords