Sensors (May 2023)

Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface

  • Tatsuya Oyama
  • Shunsuke Okura
  • Kota Yoshida
  • Takeshi Fujino

DOI: https://doi.org/10.3390/s23104742
Journal volume & issue: Vol. 23, no. 10, p. 4742

Abstract

A backdoor attack is an attack that induces misclassification in a deep neural network (DNN). To trigger the backdoor, the adversary inputs an image containing a specific pattern (the adversarial mark) into the backdoored DNN model. Conventionally, the adversarial mark is placed on a physical object and captured in a photograph; with this method, the attack succeeds unreliably because the mark's size and position vary with the shooting environment. We have previously proposed creating the adversarial mark by means of a fault injection attack on the mobile industry processor interface (MIPI), the image sensor interface. In this paper, we propose an image tampering model that simulates the adversarial mark pattern produced by the actual fault injection, and we train the backdoor model on poison data images created with this simulation model. We conducted a backdoor attack experiment using a backdoor model trained on a dataset containing 5% poison data. The clean data accuracy in normal operation was 91%, while the attack success rate under fault injection was 83%.
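To make the poisoning step concrete, the following is a minimal sketch of BadNets-style dataset poisoning as described in the abstract: a fraction of training images is overwritten with a simulated tamper pattern and relabeled to the attacker's target class. The helper names (`apply_simulated_tamper`, `poison_dataset`), the horizontal-band trigger shape, and the `target_class` parameter are illustrative assumptions, not the paper's actual image tampering model; only the 5% poison rate comes from the abstract.

```python
import numpy as np

def apply_simulated_tamper(image, mark_value=255, band=slice(0, 4)):
    """Overwrite a band of pixel rows with a fixed value.

    This is a hypothetical stand-in for the paper's image tampering
    model; a fault injection on the MIPI interface would corrupt the
    transmitted pixel stream, which is approximated here as a solid
    horizontal band.
    """
    tampered = image.copy()
    tampered[band, ...] = mark_value
    return tampered

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Tamper a poison_rate fraction of samples and relabel them
    to the attacker's target class (BadNets-style poisoning)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_simulated_tamper(images[i])
        labels[i] = target_class
    return images, labels

# Example: poison 5% of a toy grayscale dataset toward class 0.
images = np.random.randint(0, 256, size=(1000, 32, 32), dtype=np.uint8)
labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(images, labels, target_class=0)
```

A model trained on the poisoned set would then behave normally on clean inputs but predict the target class whenever the tamper pattern appears, which in the paper's setting is injected at inference time via the fault attack rather than in software.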

Keywords