Mathematical Biosciences and Engineering (Jun 2023)
Attention distraction with gradient sharpening for multi-task adversarial attack
Abstract
The advancement of deep learning has led to significant improvements on various visual tasks. However, deep neural networks (DNNs) have been found to be vulnerable to well-designed adversarial examples, which can easily deceive DNNs by adding visually imperceptible perturbations to the original clean data. Prior research on adversarial attack methods has mainly focused on single-task settings, i.e., generating adversarial examples to fool a network trained for a specific task. However, real-world artificial intelligence systems often need to solve multiple tasks simultaneously. In such multi-task settings, single-task adversarial attacks perform poorly on the unrelated tasks. To address this issue, the generation of multi-task adversarial examples should leverage knowledge that generalizes across tasks and reduce the impact of task-specific information during the generation process. In this study, we propose a multi-task adversarial attack method that generates adversarial examples from a multi-task learning network by applying attention distraction with gradient sharpening. Specifically, we first attack the attention heat maps, which contain more generalizable information than feature representations, by distracting the attention over the attacked regions. In addition, we adopt gradient-based adversarial example generation schemes and propose to sharpen the gradients so that gradients carrying multi-task information, rather than only task-specific information, have a greater impact. Experimental results on the NYUD-V2 and PASCAL datasets demonstrate that the proposed method improves the generalization ability of adversarial examples across multiple tasks and achieves better attack performance.
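To make the abstract's two ingredients concrete, the following is a minimal, hypothetical sketch of a PGD-style multi-task attack whose loss includes an attention-distraction term and whose update sharpens the gradient before each step. The toy two-head network, the definition of the attention map as a channel-mean of shared features, the KL-based distraction loss, and the sharpening exponent are all illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch (assumptions noted above), not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiTaskNet(nn.Module):
    """Toy shared backbone with two dense-prediction heads (e.g., segmentation and depth)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(16, 5, 1)    # 5-class segmentation head
        self.depth_head = nn.Conv2d(16, 1, 1)  # depth regression head

    def forward(self, x):
        feat = self.backbone(x)
        # "Attention heat map" assumed here: channel-mean of shared features,
        # normalized over spatial locations with a softmax.
        attn = feat.mean(dim=1, keepdim=True)
        attn = torch.softmax(attn.flatten(2), dim=-1).view_as(attn)
        return self.seg_head(feat), self.depth_head(feat), attn

def sharpen(grad, temperature=0.5):
    """Gradient sharpening (illustrative): raise |grad| to 1/temperature so that
    large gradient directions shared across tasks dominate small task-specific ones."""
    return grad.sign() * grad.abs().pow(1.0 / temperature)

def multi_task_attack(model, x, seg_target, depth_target,
                      eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    with torch.no_grad():
        _, _, attn_clean = model(x)  # clean attention map used as the distraction reference
    for _ in range(steps):
        x_adv.requires_grad_(True)
        seg_out, depth_out, attn = model(x_adv)
        # Task losses plus an attention-distraction term: ascending this loss
        # both degrades the task outputs and pushes the adversarial attention
        # away from the clean attention map.
        task_loss = F.cross_entropy(seg_out, seg_target) + F.l1_loss(depth_out, depth_target)
        distract = F.kl_div(attn.flatten(1).log(), attn_clean.flatten(1), reduction="batchmean")
        loss = task_loss + distract
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Sharpen, rescale to [-1, 1] per sample, and take a bounded ascent step.
        g = sharpen(grad)
        g = g / (g.abs().amax(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(x + (x_adv - x).clamp(-eps, eps), 0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    model = TinyMultiTaskNet().eval()
    x = torch.rand(2, 3, 32, 32)
    seg_t = torch.randint(0, 5, (2, 32, 32))
    depth_t = torch.rand(2, 1, 32, 32)
    x_adv = multi_task_attack(model, x, seg_t, depth_t)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

The intent of the sketch is only to show how an attention-level loss and a gradient-sharpening step can be combined in one iterative attack; the choice of attention definition, loss weighting, and sharpening schedule would follow the paper's own design in practice.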
Keywords