网络与信息安全学报 (Chinese Journal of Network and Information Security), Dec 2022

Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

  • Bin WANG, Simin LI, Yaguan QIAN, Jun ZHANG, Chaohao LI, Chenming ZHU, Hongfei ZHANG

DOI
https://doi.org/10.11959/j.issn.2096-109x.2022074
Journal volume & issue
Vol. 8, no. 6
pp. 102 – 109

Abstract


Adversarial training, which incorporates adversarial samples into the training process, is one of the commonly used defense methods against adversarial attacks. However, its effectiveness heavily relies on the size of the trained model. Specifically, the size of the models generated by adversarial training increases significantly in order to defend against adversarial attacks. This constrains the usability of adversarial training, especially in resource-constrained environments. Thus, how to reduce the model size while ensuring the robustness of the trained model is a challenge. To address this issue, a lightweight defense mechanism against adversarial attacks was proposed, combining adaptive pruning and robust distillation. A hierarchically adaptive pruning method was first applied to the model generated by adversarial training; the pruned model was then further compressed by a modified robust distillation method. Experimental results on the CIFAR-10 and CIFAR-100 datasets showed that the hierarchically adaptive pruning method achieved stronger robustness under various FLOP budgets than existing pruning methods, and that the fusion of pruning and robust distillation achieved higher robustness than state-of-the-art robust distillation methods. These results demonstrate that the proposed method can improve the usability of adversarial training in IoT edge-computing environments.
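The pipeline described above (adversarially trained teacher → layer-wise magnitude pruning → robust distillation into the compressed student) can be sketched in generic form. This is an illustrative sketch only, not the paper's exact method: the function names, the magnitude-based pruning criterion, and the specific loss weighting (cross-entropy on adversarial examples plus temperature-softened KL divergence to the teacher) are assumptions standing in for the paper's hierarchically adaptive pruning and modified robust distillation.

```python
import math

def softmax(logits, temperature=1.0):
    """Plain softmax over a list of logits, with optional temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prune_layer(weights, keep_ratio):
    """Zero out the smallest-magnitude weights of one layer, keeping the
    top `keep_ratio` fraction (a simple layer-wise pruning criterion;
    the paper's method instead adapts the ratio per layer)."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def robust_distillation_loss(student_logits_adv, teacher_logits, label,
                             alpha=0.5, temperature=4.0):
    """Generic robust-distillation objective: weighted sum of
    (a) cross-entropy of the pruned student on an adversarial example and
    (b) KL divergence from the robust teacher's softened outputs."""
    ce = -math.log(softmax(student_logits_adv)[label])
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits_adv, temperature)
    kl = sum(t * math.log(t / s) for t, s in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * temperature ** 2 * kl
```

A student whose adversarial-example logits agree with the teacher and the true label incurs a near-zero loss, while disagreement is penalized by both terms; the `temperature ** 2` factor is the usual rescaling that keeps the soft-label gradient magnitude comparable to the hard-label term.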

Keywords