IEEE Access (Jan 2020)

Towards Adversarial Robustness via Feature Matching

  • Zhuorong Li,
  • Chao Feng,
  • Jianwei Zheng,
  • Minghui Wu,
  • Hongchuan Yu

DOI
https://doi.org/10.1109/ACCESS.2020.2993304
Journal volume & issue
Vol. 8
pp. 88594–88603

Abstract

Image classification systems are known to be vulnerable to adversarial attacks: inputs that are imperceptibly perturbed yet lead to strikingly incorrect classifications. Adversarial training is one of the most effective defenses for improving the robustness of classifiers. In this work, we introduce an enhanced adversarial training approach. Motivated by humans' consistently accurate perception of their surroundings, we explore the artificial attention of deep neural networks in the context of adversarial classification. We begin with an empirical analysis of how the attention of artificial systems changes as the model undergoes adversarial attack. We observe that class-specific attention is diverted and subsequently induces wrong predictions. To address this, we propose a regularizer that encourages consistency between the artificial attention on a clean image and on its adversarial counterpart. Our method shows improved empirical robustness over the state of the art, securing 55.74% adversarial accuracy on CIFAR-10 with a perturbation budget of 8/255 under the challenging untargeted attack in the white-box setting. Further evaluations on CIFAR-100 also demonstrate the potential of our method for a desirable boost in adversarial robustness of deep neural networks. Code and trained models are available at: https://github.com/lizhuorong/Towards-Adversarial-Robustness-via-Feature-matching.
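For illustration, below is a minimal sketch of the kind of training step the abstract describes: PGD-based adversarial training augmented with a feature-matching penalty that ties the network's internal attention/features on a clean image to those on its adversarial counterpart. This is not the authors' released implementation; it assumes a PyTorch classifier exposing an intermediate feature map via hypothetical `model.features(x)` and `model.classifier(feat)` methods, and the PGD settings (8/255 budget) and regularization weight `lam` are illustrative placeholders.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Untargeted L-infinity PGD used to craft the adversarial counterpart.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def training_step(model, optimizer, x, y, lam=1.0):
        # One adversarial-training step with a feature-matching regularizer.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        feat_clean = model.features(x)      # hypothetical intermediate-feature hook
        feat_adv = model.features(x_adv)
        logits_adv = model.classifier(feat_adv)
        # Classification loss on adversarial examples plus a consistency term
        # that matches clean and adversarial feature maps (attention matching).
        loss = F.cross_entropy(logits_adv, y) + lam * F.mse_loss(feat_adv, feat_clean)
        loss.backward()
        optimizer.step()
        return loss.item()

The consistency term here is a simple mean-squared error between feature maps; the paper's regularizer operates on the model's artificial attention, so the exact matching criterion and the layer at which it is applied should be taken from the released code.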

Keywords