Frontiers in High Performance Computing (Sep 2024)

Neural architecture search for adversarial robustness via learnable pruning

  • Yize Li,
  • Pu Zhao,
  • Ruyi Ding,
  • Tong Zhou,
  • Yunsi Fei,
  • Xiaolin Xu,
  • Xue Lin

DOI: https://doi.org/10.3389/fhpcp.2024.1301384
Journal volume & issue: Vol. 2

Abstract

The impressive performance of deep neural networks (DNNs) can degrade dramatically under maliciously crafted inputs, known as adversarial examples. Moreover, with the proliferation of edge platforms, it is essential to reduce DNN model size for efficient deployment on resource-limited edge devices. To achieve both adversarial robustness and model sparsity, we propose a robustness-aware search framework, Adversarial Neural Architecture Search by the Pruning policy (ANAS-P). The layer-wise width is searched automatically via a binary convolutional mask, termed the Depth-wise Differentiable Binary Convolutional indicator (D2BC). Through comprehensive experiments on three classification datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) with two adversarial losses, TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) and MART (Misclassification Aware adveRsarial Training), we empirically demonstrate the effectiveness of ANAS-P in terms of clean accuracy and adversarial robust accuracy across various sparsity levels. ANAS-P outperforms representative prior methods, with especially significant improvements in high-sparsity settings.
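To make the idea of a learnable binary pruning mask concrete, the sketch below shows a simplified per-channel indicator trained with a straight-through estimator, so that hard keep/prune decisions still pass gradients to the underlying scores. This is an illustrative stand-in under our own assumptions, not the authors' D2BC implementation; the class name, parameters, and wiring here are hypothetical.

```python
import torch
import torch.nn as nn

class BinaryChannelMask(nn.Module):
    """Illustrative sketch of a differentiable binary channel indicator.

    Each output channel has a real-valued score that is binarized in the
    forward pass (keep/prune) and trained with a straight-through estimator,
    so the effective layer width can be learned jointly with a robust
    training loss such as TRADES or MART. Hypothetical stand-in for D2BC.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        # One learnable score per channel; positive -> keep, negative -> prune.
        self.scores = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hard = (self.scores > 0).float()                  # binary keep/prune decision
        mask = hard + self.scores - self.scores.detach()  # straight-through gradient
        return x * mask.view(1, -1, 1, 1)                 # zero out pruned channels

# Usage: mask the output channels of a convolution so its width is searchable.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
mask = BinaryChannelMask(64)
x = torch.randn(8, 3, 32, 32)   # a CIFAR-sized batch
out = mask(conv(x))             # masked feature map, shape (8, 64, 32, 32)
```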

Keywords