International Journal of Computational Intelligence Systems (Aug 2024)
Enhancing the Transferability of Adversarial Patch via Alternating Minimization
Abstract
Adversarial patches (APs), a type of adversarial example, pose serious security threats to deep neural networks (DNNs) by inducing erroneous outputs. Existing gradient-stabilization methods seek to stabilize the optimization direction of adversarial examples by accumulating gradient momentum, thereby enhancing attack transferability to black-box models. However, they are not fully effective for adversarial patches: the momentum accumulated during optimization often misaligns with the optimization direction, reducing their efficacy in black-box scenarios. We introduce an optimization method called Alternating Minimization for Adversarial Patch (AMAP), which decomposes the original AP into multiple sub-patches and uses their update directions to stabilize the optimization of the original AP. Additionally, we propose an adaptive step-size optimization method that accelerates convergence and boosts attack performance. On face recognition tasks, AMAP outperforms baseline methods by 5.21% and exceeds the second-best method by 1.4%. Furthermore, AMAP demonstrates practical feasibility in the physical domain, highlighting its potential for robust computer-security testing applications.
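The core idea described above, splitting the patch into sub-patches and aggregating their update directions, can be illustrated with a minimal sketch. The following is an assumption-laden toy reconstruction, not the authors' algorithm: a quadratic loss stands in for the attack objective, the sub-patch split is a simple row partition, and the adaptive step size is modeled as a decaying schedule.

```python
# Illustrative sketch of AMAP-style alternating minimization (assumptions:
# a toy quadratic "loss" replaces the attack objective; the sub-patch split,
# direction aggregation, and step-size rule are hypothetical simplifications).
import numpy as np

def toy_loss_grad(patch):
    # Hypothetical stand-in for the gradient of the attack loss w.r.t. the
    # patch; the true method would backpropagate through a victim model.
    return 2.0 * (patch - 1.0)

def amap_step(patch, step, n_sub=4):
    h, _ = patch.shape
    agg = np.zeros_like(patch)
    # Alternating phase: compute an update direction per sub-patch.
    for i in range(n_sub):
        mask = np.zeros_like(patch)
        mask[i * h // n_sub:(i + 1) * h // n_sub, :] = 1.0
        sub_grad = toy_loss_grad(patch) * mask  # gradient restricted to one sub-patch
        agg += np.sign(sub_grad)                # sign direction, FGSM-style
    # Use the aggregated sub-patch directions to update the whole patch.
    return patch - step * np.sign(agg)

patch = np.zeros((8, 8))
step = 0.1
for _ in range(20):
    patch = amap_step(patch, step)
    step *= 0.95  # decaying step size -- a placeholder for the adaptive rule
```

Under this toy loss the patch converges toward its minimizer (all-ones), showing how the aggregated sub-patch directions still drive a coherent full-patch update.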
Keywords