Tehnički Vjesnik (Jan 2023)

Defending Against Local Adversarial Attacks through Empirical Gradient Optimization

  • Boyang Sun,
  • Xiaoxuan Ma,
  • Hengyou Wang

DOI
https://doi.org/10.17559/TV-20230607000701
Journal volume & issue
Vol. 30, no. 6
pp. 1888 – 1898

Abstract

Deep neural networks (DNNs) are susceptible to adversarial attacks, including the recently introduced locally visible adversarial patch attack, which achieves a success rate exceeding 96%. These attacks pose significant challenges to DNN security. Various defense methods, such as adversarial training, robust attention modules, watermarking, and gradient smoothing, have been proposed to enhance empirical robustness against patch attacks. However, these methods often have limitations concerning patch location requirements, randomness, and their impact on recognition accuracy for clean images. To address these challenges, we propose a novel defense algorithm called Local Adversarial Attack Empirical Defense using Gradient Optimization (LAAGO). The algorithm incorporates a low-pass filter before noise suppression to effectively mitigate the interference of high-frequency noise on the classifier while preserving the low-frequency content of the images. Additionally, it emphasizes the original target features by enhancing the image gradients. Extensive experimental results demonstrate that the proposed method improves defense performance by 3.69% for 80 × 80 noise patches (covering approximately 4% of the image), while incurring only a negligible 0.3% accuracy drop on clean images. The LAAGO algorithm provides a robust defense mechanism against local adversarial attacks, overcoming the limitations of previous methods. Our approach leverages gradient optimization, noise suppression, and feature enhancement, resulting in significant improvements in defense performance while maintaining high accuracy on clean images. This work contributes to the advancement of defense strategies against emerging adversarial attacks, thereby enhancing the security and reliability of deep neural networks.
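The abstract does not include implementation details, so the following is only a minimal sketch of the two preprocessing ideas it describes: low-pass filtering to suppress high-frequency patch noise, followed by emphasizing the original target features via image gradients. The function name `preprocess`, the Gaussian `sigma`, and the blending weight `alpha` are illustrative assumptions, not the authors' LAAGO algorithm or its parameters.

```python
# Illustrative sketch only (assumed parameters, not the LAAGO implementation):
# 1) low-pass filter the image to attenuate high-frequency adversarial noise,
# 2) add back a small multiple of the gradient magnitude to re-emphasize the
#    original target's edge structure.
import numpy as np
from scipy import ndimage

def preprocess(image: np.ndarray, sigma: float = 1.5, alpha: float = 0.3) -> np.ndarray:
    """image: float array in [0, 1], shape (H, W) or (H, W, C)."""
    # Step 1: Gaussian low-pass filtering, applied per channel so colors do not mix.
    if image.ndim == 3:
        smoothed = np.stack(
            [ndimage.gaussian_filter(image[..., c], sigma=sigma) for c in range(image.shape[-1])],
            axis=-1,
        )
    else:
        smoothed = ndimage.gaussian_filter(image, sigma=sigma)

    # Step 2: gradient enhancement via the Sobel gradient magnitude of the
    # smoothed image, normalized and blended back with weight alpha (assumed).
    gray = smoothed.mean(axis=-1) if smoothed.ndim == 3 else smoothed
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    grad_mag = np.hypot(gx, gy)
    grad_mag /= grad_mag.max() + 1e-8

    boost = grad_mag[..., None] if smoothed.ndim == 3 else grad_mag
    return np.clip(smoothed + alpha * boost, 0.0, 1.0)
```

In this sketch the filtered image, rather than the raw input, would be passed to the classifier; the actual LAAGO noise-suppression and gradient-optimization steps reported in the paper are more involved than this illustration.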

Keywords