IEEE Access (2019)

DDSA: A Defense Against Adversarial Attacks Using Deep Denoising Sparse Autoencoder

  • Yassine Bakhti
  • Sid Ahmed Fezza
  • Wassim Hamidouche
  • Olivier Deforges

DOI: https://doi.org/10.1109/ACCESS.2019.2951526
Journal volume & issue: Vol. 7, pp. 160397–160407

Abstract


Given their outstanding performance, Deep Neural Network (DNN) models have been deployed in many real-world applications. However, recent studies have demonstrated that they are vulnerable to small, carefully crafted perturbations, i.e., adversarial examples, which considerably decrease their performance and can lead to devastating consequences, especially in safety-critical applications such as autonomous vehicles, healthcare and face recognition. It is therefore of paramount importance to develop defense solutions that increase the robustness of DNNs against adversarial attacks. In this paper, we propose a novel defense based on a Deep Denoising Sparse Autoencoder (DDSA). The proposed method operates as a pre-processing step that removes the adversarial noise from input samples before they are fed to the classifier. This pre-processing defense block can be combined with any classifier, without any change to its architecture or training procedure. In addition, the proposed method is a universal defense: since it requires no knowledge about the attack, it can be used against any type of attack. Experimental results on the MNIST and CIFAR-10 datasets show that the proposed DDSA defense provides high robustness against a set of prominent attacks under white-, gray- and black-box settings, and outperforms state-of-the-art defense methods.
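Since the abstract only summarizes the approach, the following is a minimal PyTorch sketch of the general idea: a denoising sparse autoencoder used as a plug-in pre-processing block in front of an unchanged classifier. The layer sizes, the Gaussian corruption used for training, and the L1 sparsity penalty are illustrative assumptions, not the exact design from the paper.

```python
# Minimal sketch of a denoising sparse autoencoder used as a pre-processing
# defense: train it to map noise-corrupted inputs back to clean inputs, then
# denoise every sample before handing it to the (untouched) classifier.
# Architecture, noise model and sparsity penalty are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAE(nn.Module):
    def __init__(self, in_dim: int = 784, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)        # hidden code, pushed toward sparsity
        return self.decoder(h), h  # reconstruction and code

def train_step(ae, x_clean, optimizer, noise_std=0.3, sparsity_weight=1e-4):
    """One training step: reconstruct the clean input from a corrupted copy,
    with an L1 penalty on the hidden activations to encourage sparsity
    (one common choice of sparsity regularizer, assumed here)."""
    x = x_clean.view(x_clean.size(0), -1)
    x_noisy = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
    x_hat, h = ae(x_noisy)
    loss = F.mse_loss(x_hat, x) + sparsity_weight * h.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def defended_predict(ae, classifier, x):
    """Inference: denoise first, then classify. The classifier is used as-is,
    with no change to its architecture or training, as stated in the abstract."""
    denoised, _ = ae(x.view(x.size(0), -1))
    return classifier(denoised.view_as(x))
```

Note that the autoencoder here is trained only on clean samples plus generic noise, so the same front end can be reused against any attack without retraining, which is what the abstract means by a universal, attack-agnostic defense.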
