Future Internet (Nov 2021)

Improving the Robustness of Model Compression by On-Manifold Adversarial Training

  • Junhyung Kwon,
  • Sangkyun Lee

DOI
https://doi.org/10.3390/fi13120300
Journal volume & issue
Vol. 13, no. 12
p. 300

Abstract

Despite advances in deep learning technology, assuring the robustness of deep neural networks (DNNs) is challenging yet necessary in safety-critical environments, including automobiles, IoT devices in smart factories, and medical devices, to name a few. Furthermore, recent developments allow us to compress DNNs, reducing their size and computational requirements so that they fit into small embedded devices. However, the robustness of a compressed DNN has not been well studied in relation to other critical factors, such as prediction performance and model size. In particular, existing studies on robust model compression have focused on robustness against off-manifold adversarial perturbations, which does not explain how a DNN will behave against perturbations that follow the same probability distribution as the training data. This aspect is especially relevant for on-device AI models, which are more likely to experience perturbations from noise in the regular data observation environment than off-manifold perturbations crafted by an external attacker. Therefore, this paper investigates the robustness of compressed deep neural networks, focusing on the relationship between model size and prediction performance under noisy perturbations. Our experiments show that on-manifold adversarial training can be effective in building robust classifiers, especially when the model compression rate is high.
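
To illustrate the distinction the abstract draws, on-manifold adversarial examples are commonly generated by perturbing the latent code of a generative model (e.g., a VAE) rather than the raw input, so the resulting sample remains on the learned data manifold. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the encoder, decoder, and classifier modules, as well as all parameter values, are assumptions for illustration.

    # Minimal sketch of on-manifold adversarial example generation via
    # projected gradient ascent in the latent space of a pretrained
    # generative model. Not the paper's code; encoder/decoder/classifier
    # are hypothetical nn.Modules assumed to be pretrained.
    import torch
    import torch.nn.functional as F

    def on_manifold_attack(classifier, encoder, decoder, x, y,
                           eps=0.3, step=0.05, iters=10):
        with torch.no_grad():
            z0 = encoder(x)                 # latent code of the clean input
        delta = torch.zeros_like(z0, requires_grad=True)
        for _ in range(iters):
            x_adv = decoder(z0 + delta)     # decoded sample stays on-manifold
            loss = F.cross_entropy(classifier(x_adv), y)
            loss.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()  # ascend the loss surface
                delta.clamp_(-eps, eps)            # bound the latent shift
                delta.grad.zero_()
        with torch.no_grad():
            return decoder(z0 + delta)      # on-manifold adversarial example

On-manifold adversarial training would then mix such examples into the training batches of the (compressed) classifier; an off-manifold attack, by contrast, perturbs x directly in input space, which generally pushes the sample off the data distribution.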

Keywords