IEEE Access (Jan 2024)

DNN Robustness Enhancement Based on Adversarial Sample Prioritization

  • Long Zhang,
  • Jiangzhao Wu,
  • Siyuan Ma,
  • Jian Liu

DOI
https://doi.org/10.1109/ACCESS.2024.3439494
Journal volume & issue
Vol. 12
pp. 147860 – 147881

Abstract

Adversarial attacks pose significant threats to the robustness of deep neural networks (DNNs), necessitating the development of effective defense mechanisms. This paper presents a novel model for enhancing DNN robustness based on adversarial density selection (ADS). The proposed model systematically identifies and prioritizes adversarial samples by analyzing their density and output confidence. Our method employs a Hard-Random case selection strategy that combines high-priority and random samples, guiding the selection of data subsets for retraining without additional labeling resources. Comprehensive experiments conducted on benchmark datasets such as CIFAR-10, Fashion-MNIST, MNIST, and SVHN using classic neural network models (LeNet and ResNet) demonstrate that ADS significantly improves DNN robustness. Key metrics used for evaluation include Average Percentage of Faults Detected (APFD) and Area Under the Receiver Operating Characteristic Curve (AUC-n) values, where ADS achieves up to 28% improvement in robustness and enhances test case prioritization effectiveness by up to 25%. These results highlight ADS's superior performance in fortifying DNNs against adversarial attacks while maintaining high testing efficiency.
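The abstract's core idea, scoring samples by density and output confidence and then mixing high-priority with random picks for retraining, can be illustrated with a minimal sketch. This is not the paper's implementation: the density proxy (mean k-nearest-neighbour distance in feature space), the uncertainty proxy (1 minus max softmax probability), and all function names and parameters here are assumptions for illustration only.

```python
import numpy as np

def priority_scores(features, probs, k=5):
    """Hypothetical density-plus-confidence score per sample.

    Density is approximated by mean distance to the k nearest neighbours
    in feature space (large value => sparse region); uncertainty is
    1 - max softmax probability (large value => low confidence).
    """
    # Pairwise Euclidean distances; exclude self-distance on the diagonal.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    knn_dist = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    uncertainty = 1.0 - probs.max(axis=1)

    # Min-max normalize each term to [0, 1] before combining.
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    return norm(knn_dist) + norm(uncertainty)

def hard_random_select(scores, budget, hard_frac=0.7, rng=None):
    """Pick `budget` indices: a hard (top-scoring) block plus a random remainder,
    mirroring the Hard-Random selection idea described in the abstract."""
    rng = rng or np.random.default_rng(0)
    n_hard = int(budget * hard_frac)
    order = np.argsort(scores)[::-1]          # descending by priority
    hard = order[:n_hard]                     # highest-priority samples
    rand = rng.choice(order[n_hard:], size=budget - n_hard, replace=False)
    return np.concatenate([hard, rand])
```

The selected indices would then point at the unlabeled adversarial samples to include in a retraining subset; the 70/30 hard/random split is an arbitrary placeholder, not a value taken from the paper.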

Keywords