IEEE Access (Jan 2024)
A Fast and Efficient Adversarial Attack Based on Feasible Direction Method
Abstract
It has been shown that deep neural networks can be easily fooled by adversarial examples, which are samples carefully crafted by adding small, imperceptible perturbations to the input. Most optimization-based white-box attacks generate adversarial examples by formulating a constrained optimization problem and solving it with gradient-based iterative methods. During each iteration, the search point is projected back onto the feasible region whenever it falls outside of it. However, this projection breaks the continuity of the search process and degrades attack performance. In addition, existing methods often employ an identical step size for all dimensions. In this work, to address these problems and improve the construction of adversarial examples, we propose a novel white-box attack that generates $\ell_{\infty}$ adversarial examples based on the feasible direction method. In our attack, the step size of each dimension is carefully adjusted to preserve feasibility and to accelerate convergence. The efficiency of the proposed method is evaluated on three widely used datasets (MNIST, CIFAR-10, and ImageNet). The experimental results show that our method generates adversarial examples with a higher success rate and faster convergence than existing methods. To facilitate reproduction of the presented experimental results, we make the source code of our implementation publicly available at https://github.com/sjysjy1/Feasible_Direction_Method_Attack.
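To illustrate the core idea, the sketch below shows a feasible-direction-style update in which each dimension's step size is clamped so that the iterate never leaves the $\ell_{\infty}$ ball, in contrast to the usual take-a-step-then-project scheme. This is a minimal illustration under our own assumptions (PyTorch, an untargeted cross-entropy objective, a toy linear model, and a fixed nominal step size), not the paper's exact algorithm; the function name `feasible_direction_step` and the chosen hyperparameters are hypothetical.

```python
# Minimal sketch of a feasible-direction-style l_inf update (illustrative only,
# not the paper's exact algorithm). Instead of taking an unconstrained step and
# projecting back onto the epsilon-ball, each dimension's step size is clamped
# so the next iterate is feasible by construction.
import torch
import torch.nn.functional as F

def feasible_direction_step(x_adv, x_orig, grad, eps, alpha):
    direction = grad.sign()  # ascent direction for an untargeted attack
    # Remaining per-dimension room inside [x_orig - eps, x_orig + eps].
    room_up = (x_orig + eps - x_adv).clamp(min=0.0)
    room_down = (x_adv - (x_orig - eps)).clamp(min=0.0)
    room = torch.where(direction > 0, room_up, room_down)
    # Per-dimension step size: the nominal step alpha, shrunk where needed
    # so the update stays feasible (no projection required afterwards).
    step = room.clamp(max=alpha)
    return (x_adv + step * direction).clamp(0.0, 1.0)  # keep valid pixel range

# Toy usage with a randomly initialized linear classifier on MNIST-sized inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = x.clone()
for _ in range(10):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    x_adv = feasible_direction_step(x_adv.detach(), x, grad, eps=0.1, alpha=0.02)
assert (x_adv - x).abs().max() <= 0.1 + 1e-6  # iterates stayed feasible throughout
```

In a projection-based attack such as PGD, the update would instead take a full step of size alpha in every dimension and then clip the result back into the ball; clamping the step per dimension keeps every iterate inside the feasible region, which is the continuity property the abstract refers to.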
Keywords