IEEE Access (Jan 2019)

Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example

  • Hyun Kwon,
  • Hyunsoo Yoon,
  • Daeseon Choi

DOI
https://doi.org/10.1109/ACCESS.2019.2915971
Journal volume & issue
Vol. 7
pp. 60908–60919

Abstract


Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples, created by adding a small amount of noise to an original sample, can lead to misclassification by a DNN. Conventional studies on adversarial examples have focused on causing misclassification by modulating the entire image. In some cases, however, a restricted adversarial example may be required, in which only certain parts of the image are modified rather than the entire image, while still causing misclassification by the DNN. For example, once a road sign has already been installed, an attack may need to change only a specific part of the sign, such as by placing a sticker on it, to cause the whole sign to be misidentified. As another example, an attack may need to cause a DNN to misinterpret an image by modulating only its outside border. In this paper, we propose a new restricted adversarial example that modifies only a restricted area to cause misclassification by a DNN while minimizing distortion from the original sample. The method also allows the size of the restricted area to be selected. We used the CIFAR10 and ImageNet datasets to evaluate the performance, measuring the attack success rate and distortion of the restricted adversarial example while adjusting the size, shape, and position of the restricted area. The results show that the proposed scheme generates restricted adversarial examples with a 100% attack success rate while modifying only a restricted area of the whole image (approximately 14% for CIFAR10 and 1.07% for ImageNet) and minimizing the distortion distance.
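To illustrate the idea of confining the perturbation to a chosen region, the following is a minimal sketch, not the authors' exact optimization. It assumes a PyTorch classifier `model`, an input image tensor `x` in [0, 1], a target class `target`, and a binary `mask` marking the restricted area; the paper's distortion-minimizing objective is approximated here by a simple targeted loss with an L2 penalty.

```python
# Illustrative mask-restricted adversarial attack (assumptions noted above;
# this is not the paper's implementation).
import torch
import torch.nn.functional as F

def restricted_adversarial(model, x, target, mask, steps=200, lr=0.01, c=0.1):
    """Perturb only the pixels where mask == 1 so that `model` classifies
    the result as `target`, while keeping the distortion small."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Restrict the perturbation to the chosen area and keep pixels valid.
        x_adv = torch.clamp(x + delta * mask, 0.0, 1.0)
        logits = model(x_adv)
        # Targeted misclassification loss plus an L2 distortion penalty.
        loss = F.cross_entropy(logits, target) + c * (delta * mask).norm()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(x + delta.detach() * mask, 0.0, 1.0)
```

Changing the shape or coverage of `mask` corresponds to the paper's experiments that vary the size, shape, and position of the restricted area.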

Keywords