Complex & Intelligent Systems (Jul 2023)

Tiny adversarial multi-objective one-shot neural architecture search

  • Guoyang Xie,
  • Jinbao Wang,
  • Guo Yu,
  • Jiayi Lyu,
  • Feng Zheng,
  • Yaochu Jin

DOI
https://doi.org/10.1007/s40747-023-01139-8
Journal volume & issue
Vol. 9, no. 6
pp. 6117 – 6138

Abstract

The tiny neural networks (TNNs) widely deployed on mobile devices are vulnerable to adversarial attacks, yet advanced research on the robustness of TNNs is still in high demand. This work focuses on improving the robustness of TNNs without sacrificing model accuracy. To find networks with the optimal trade-off among adversarial accuracy, clean accuracy, and model size, we present TAM-NAS, a tiny adversarial multi-objective one-shot neural architecture search method. First, we build a novel search space composed of new tiny blocks and channels to balance model size against adversarial performance. Then, we demonstrate how the supernet facilitates finding the optimal subnet under white-box adversarial attacks, given that the supernet significantly impacts the subnet's performance. Concretely, we investigate a new adversarial training paradigm by evaluating adversarial transferability, the width of the supernet, and the difference between training subnets from scratch and fine-tuning them. Finally, we conduct a statistical analysis of the layer-wise combinations of specific blocks and channels on the first non-dominated front, which can serve as a guideline for designing TNNs.
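The adversarial accuracy objective mentioned above is typically measured by attacking a candidate network with a white-box method such as FGSM and recording how much its accuracy drops. A minimal sketch of that evaluation, using a toy linear softmax classifier in place of a searched subnet (all names and data here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fgsm_attack(x, y, W, b, eps):
    """White-box FGSM: perturb x along the sign of the loss gradient.

    A linear softmax classifier stands in for a subnet; the real method
    searches CNN subnets, this only illustrates the adversarial-accuracy
    metric used as one objective in the multi-objective search.
    """
    logits = x @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0      # dL/dlogits for cross-entropy
    grad_x = p @ W.T                    # chain rule back to the input
    return x + eps * np.sign(grad_x)

def accuracy(x, y, W, b):
    return float((np.argmax(x @ W + b, axis=1) == y).mean())

rng = np.random.default_rng(0)
# Toy 2-class data separated along the first feature.
x = rng.normal(size=(200, 10))
y = (x[:, 0] > 0).astype(int)
W = np.zeros((10, 2)); W[0] = [-2.0, 2.0]; b = np.zeros(2)

clean_acc = accuracy(x, y, W, b)
adv_acc = accuracy(fgsm_attack(x, y, W, b, eps=0.5), y, W, b)
# clean_acc stays high; adv_acc drops under the attack.
```

In the paper's setting, the pair (clean_acc, adv_acc), together with model size, would form the objective vector used to rank subnets on the non-dominated fronts.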

Keywords