IEEE Access (Jan 2024)

A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense

  • Ryota Iijima,
  • Sayaka Shiota,
  • Hitoshi Kiya

DOI
https://doi.org/10.1109/ACCESS.2024.3400958
Journal volume & issue
Vol. 12
pp. 69206 – 69216

Abstract


Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In previous studies, models encrypted with a secret key were demonstrated to be robust against white-box attacks, but not against black-box ones. In this paper, we propose a novel method that uses a random ensemble of encrypted vision transformer (ViT) models to enhance robustness against both white-box and black-box attacks. In addition, a benchmark attack method, AutoAttack, is applied to the models to evaluate adversarial robustness objectively. In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in image classification tasks on the CIFAR-10 and ImageNet datasets. The method was also compared with the state of the art on RobustBench, a standardized benchmark for adversarial robustness, and it was verified to outperform conventional defenses in terms of both clean accuracy and robust accuracy.
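The sketch below is not the authors' code; it only illustrates the general idea of random-ensemble inference over key-encrypted models as summarized in the abstract. It assumes block-wise pixel shuffling with a per-model secret key as the encryption step, and the model list, keys, and block size are hypothetical placeholders.

```python
# Minimal sketch (assumptions labeled): each model is assumed to have been
# trained on images encrypted with its own secret key; at inference, one model
# is picked at random and the query image is encrypted with that model's key.
import numpy as np

def block_shuffle(image: np.ndarray, key: int, block: int = 4) -> np.ndarray:
    """Shuffle pixels inside each (block x block) patch with a key-seeded permutation.

    Assumes image height and width are divisible by `block` (e.g., 32x32 CIFAR-10).
    """
    h, w, c = image.shape
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block * c)  # one fixed permutation per key
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block, :].reshape(-1)
            out[y:y + block, x:x + block, :] = patch[perm].reshape(block, block, c)
    return out

def random_ensemble_predict(image: np.ndarray, models: list, keys: list) -> np.ndarray:
    """Randomly select one encrypted model, encrypt the input with its key, classify."""
    i = np.random.randint(len(models))         # random model selection per query
    encrypted = block_shuffle(image, keys[i])  # match the input to the chosen model's key
    return models[i](encrypted)                # each model maps an image to logits/probabilities
```

In this reading, the per-query random choice of model (and hence of secret key) is what makes the effective decision boundary hard for an attacker to probe, whether the attack has white-box or black-box access.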

Keywords