网络与信息安全学报 (Feb 2021)
Moving target defense against adversarial attacks
Abstract
Deep neural networks have been successfully applied to image classification, but recent research shows that they are vulnerable to adversarial attacks. A moving target defense method was proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, which prevents an attacker from continuously obtaining consistent information and thus blocks the construction of adversarial examples. To improve the defense effect of the proposed method, the gradient consistency among the member models was taken as a measure to construct a new training loss function that increases the diversity among the member models. Experimental results show that the proposed method improves the moving target defense performance of the image classification system and significantly reduces the success rate of adversarial example attacks.
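To make the gradient-consistency idea in the abstract concrete, the following is a minimal sketch (not the authors' code) of how a training loss could penalize the pairwise cosine similarity between member models' input gradients, so that the members disagree in the directions an attacker would exploit. The toy models, the helper name diversity_loss, and the weight lambda_div are illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

def diversity_loss(models, x, y, lambda_div=0.1):
    """Average cross-entropy over the member models plus a penalty on
    the pairwise cosine similarity of their input gradients."""
    ce_total = 0.0
    grads = []
    for model in models:
        x_req = x.clone().requires_grad_(True)
        ce = F.cross_entropy(model(x_req), y)
        ce_total = ce_total + ce
        # Input gradient of this member's loss; keep the graph so the
        # similarity penalty can be back-propagated into the weights.
        g = torch.autograd.grad(ce, x_req, create_graph=True)[0]
        grads.append(g.flatten(1))
    cos_total = 0.0
    pairs = list(itertools.combinations(range(len(models)), 2))
    for i, j in pairs:
        cos_total = cos_total + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
    return ce_total / len(models) + lambda_div * cos_total / len(pairs)

if __name__ == "__main__":
    # Toy usage: three small classifiers trained jointly on random data.
    models = [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)) for _ in range(3)]
    params = [p for m in models for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    x = torch.randn(16, 1, 28, 28)
    y = torch.randint(0, 10, (16,))
    loss = diversity_loss(models, x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"combined loss: {loss.item():.4f}")
```

At inference time, the moving target defense described above would randomly switch which member model answers each query according to the Bayes-Stackelberg strategy; the diversity term only concerns how the members are trained.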
Keywords