IEEE Access (Jan 2022)

Evaluating Adversarial Robustness of Secret Key-Based Defenses

  • Ziad Tariq Muhammad Ali,
  • Ameer Mohammed,
  • Imtiaz Ahmad

DOI
https://doi.org/10.1109/ACCESS.2022.3162874
Journal volume & issue
Vol. 10
pp. 34872–34882

Abstract

The vulnerability of neural networks to adversarial attacks has inspired the proposal of many defenses. Key-based input transformation techniques are recently proposed methods that rely on gradient obfuscation to improve the adversarial robustness of models. However, most gradient obfuscation techniques can be broken by adaptive attacks that incorporate knowledge of the new defense; thus, defenses that rely on gradient obfuscation require thorough evaluation to determine their effectiveness. Block-wise transformation and randomized diversification are two recently proposed key-based defenses that claim adversarial robustness. In this study, we developed adaptive attacks and used preexisting attacks against key-based defenses to show that they remain vulnerable to adversarial attacks. Our experiments demonstrate that, for a block-wise transformation defense on the CIFAR-10 dataset with a block size of 4, our attacks reduce the accuracy of pixel shuffling to 7.45%, bit flipping to 4.20%, and Feistel-based encryption to 9.45%, in contrast to previous work that claims high adversarial robustness. In addition to block-wise transformation, we reduced the accuracy of the randomized diversification method by 25.30% on CIFAR-10.
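To illustrate the kind of key-based input transformation evaluated here, the sketch below shows a block-wise pixel-shuffling transform with a block size of 4. This is a minimal NumPy illustration under our own assumptions: the function name, the use of an integer seed as a stand-in for the secret key, and the single shared permutation are illustrative choices, not the authors' implementation.

```python
import numpy as np

def blockwise_pixel_shuffle(images, key, block_size=4):
    """Shuffle pixels within each non-overlapping block using a
    key-derived permutation (illustrative sketch, not the paper's code).

    images: float array of shape (N, H, W, C); H and W divisible by block_size.
    key:    integer seed standing in for the secret key.
    """
    n, h, w, c = images.shape
    rng = np.random.default_rng(key)
    # One fixed permutation of the block's pixel values, derived from the key
    # and reused for every block of every image.
    perm = rng.permutation(block_size * block_size * c)

    out = np.empty_like(images)
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = images[:, i:i + block_size, j:j + block_size, :]
            flat = block.reshape(n, -1)[:, perm]  # apply key-derived shuffle
            out[:, i:i + block_size, j:j + block_size, :] = flat.reshape(block.shape)
    return out

# Example: transform a batch of CIFAR-10-sized inputs before classification.
x = np.random.rand(8, 32, 32, 3).astype(np.float32)
x_transformed = blockwise_pixel_shuffle(x, key=1234, block_size=4)
```

Because the transformation is deterministic given the key, an adaptive attacker who models (or bypasses) it when computing gradients can defeat the obfuscation, which is the evaluation strategy pursued in this work.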

Keywords