网络与信息安全学报 (Chinese Journal of Network and Information Security), Feb 2021

Moving target defense against adversarial attacks

  • WANG Bin,
  • CHEN Liang, QIAN Yaguan, GUO Yankai, SHAO Qiqi, WANG Jiamin

DOI
https://doi.org/10.11959/j.issn.2096-109x.2021012
Journal volume & issue
Vol. 7, no. 1
pp. 113–120

Abstract


Deep neural networks have been successfully applied to image classification, but recent research shows that they are vulnerable to adversarial attacks. A moving target defense method is proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, preventing an attacker from continuously obtaining consistent information about the deployed model and thus blocking the construction of adversarial examples. To further improve the defense, the gradient consistency among member models is used as a measure to construct a new training loss function that increases the diversity among member models. Experimental results show that the proposed method improves the moving target defense performance of the image classification system and significantly reduces the success rate of adversarial-example attacks.
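The two ingredients of the abstract can be illustrated in a minimal NumPy sketch: a pairwise gradient-cosine-similarity penalty (lower similarity means more diverse member models, which is what the paper's training objective encourages) and per-query sampling of a member model from the defender's mixed strategy. This is a hypothetical illustration, not the paper's implementation: the function names, the toy random gradients, and the placeholder strategy vector are all assumptions; the paper derives the actual switching distribution from a Bayes-Stackelberg game.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_cosine(g1, g2):
    """Cosine similarity between two models' input-gradient vectors.
    Values near 1 mean the models are attacked by the same perturbation
    direction; the defense trains members to keep this low."""
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

def diversity_penalty(grads):
    """Mean pairwise gradient cosine similarity across member models.
    Adding this term to the training loss pushes members apart
    (a sketch of the gradient-consistency measure in the abstract)."""
    n = len(grads)
    sims = [gradient_cosine(grads[i], grads[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

def pick_model(strategy):
    """Sample which member model answers the current query, drawn from
    the defender's mixed strategy (a placeholder distribution here)."""
    return int(rng.choice(len(strategy), p=strategy))

# Toy example: three member models, each with a random input gradient.
grads = [rng.normal(size=10) for _ in range(3)]
penalty = diversity_penalty(grads)

# Placeholder mixed strategy over the three members.
strategy = [0.5, 0.3, 0.2]
choice = pick_model(strategy)
```

Because the penalty is an average of cosine similarities, it always lies in [-1, 1], and minimizing it during joint training decorrelates the attack directions an adversary could exploit across model switches.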

Keywords