IEEE Access (Jan 2022)

ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense

  • Ren Wang,
  • Tianqi Chen,
  • Philip Yao,
  • Sijia Liu,
  • Indika Rajapakse,
  • Alfred O. Hero

DOI: https://doi.org/10.1109/ACCESS.2022.3209243
Journal volume & issue: Vol. 10, pp. 103074–103088

Abstract

K-Nearest Neighbor (kNN)-based deep learning methods have been applied in a wide range of applications due to their simplicity and geometric interpretability. However, the robustness of kNN-based deep classification models has not been thoroughly explored, and kNN attack strategies remain underdeveloped. In this paper, we first propose an Adversarial Soft kNN (ASK) loss for developing more effective kNN-based deep neural network attack strategies and for designing better defenses against them. Our ASK loss provides a differentiable surrogate of the expected kNN classification error. It is also interpretable: it preserves the mutual information between the perturbed input and the in-class reference data. We use the ASK loss to design a novel attack, the ASK-Attack (ASK-Atk), which achieves higher attack efficiency and larger accuracy degradation than previous kNN attacks on hidden layers. We then derive an ASK-Defense (ASK-Def) method that optimizes the worst-case ASK training loss. Experiments on CIFAR-10 (ImageNet) show that (i) ASK-Atk achieves $\geq 13\%$ ($\geq 13\%$) improvement in attack success rate over previous kNN attacks, and (ii) ASK-Def outperforms conventional adversarial training by $\geq 6.9\%$ ($\geq 3.5\%$) in terms of robustness improvement. Code is available at https://github.com/wangren09/ASK.
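
To make the abstract's central idea concrete, below is a minimal PyTorch sketch of one way a soft kNN loss (a differentiable surrogate of the kNN classification error) and a PGD-style attack that maximizes it could look. The function names (soft_knn_loss, ask_style_attack), the temperature tau, and the perturbation budget and step size are illustrative assumptions, not the paper's exact formulation; the authors' implementation is at https://github.com/wangren09/ASK.

# Minimal sketch (not the authors' implementation): a soft kNN loss as a
# differentiable surrogate of the kNN classification error, plus a PGD-style
# attack that maximizes it within an L_inf ball. Names, the temperature tau,
# and the budget/step values are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_knn_loss(z, ref_in, ref_out, tau=1.0):
    # z:       (d,) feature of the (possibly perturbed) input
    # ref_in:  (k, d) in-class reference features
    # ref_out: (m, d) out-of-class reference features
    # tau:     temperature controlling how "soft" the neighbor weighting is
    refs = torch.cat([ref_in, ref_out], dim=0)             # (k + m, d)
    dists = torch.cdist(z.unsqueeze(0), refs).squeeze(0)   # Euclidean distances, (k + m,)
    weights = F.softmax(-dists / tau, dim=0)               # closer references get larger weight
    p_in = weights[: ref_in.shape[0]].sum()                # soft vote mass on the true class
    return -torch.log(p_in + 1e-12)                        # small loss <=> correct soft-kNN vote

def ask_style_attack(feature_extractor, x, ref_in, ref_out,
                     eps=8 / 255, step=2 / 255, iters=10):
    # x: (1, C, H, W) single example with pixel values in [0, 1].
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        z = feature_extractor(x + delta).squeeze(0)
        loss = soft_knn_loss(z, ref_in, ref_out)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()              # gradient ascent on the soft kNN loss
            delta.clamp_(-eps, eps)                        # project back into the L_inf ball
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()                # keep pixels in the valid range

An ASK-Def-style defense would wrap this inner maximization in an outer minimization over the network parameters, i.e., train the feature extractor on the perturbed inputs, analogous to standard adversarial training.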

Keywords