IEEE Access (Jan 2021)

Use Procedural Noise to Achieve Backdoor Attack

  • Xuan Chen,
  • Yuena Ma,
  • Shiwei Lu

DOI
https://doi.org/10.1109/ACCESS.2021.3110239
Journal volume & issue
Vol. 9
pp. 127204 – 127216

Abstract


In recent years, researchers have paid increasing attention to the security of artificial intelligence. The backdoor attack is one such threat, with a powerful and stealthy attack capability. There is a growing trend for triggers to become dynamic and global. In this paper, we propose a novel global backdoor trigger generated by procedural noise. Compared with most triggers, ours is stealthier and more straightforward to implement. There exist three types of procedural noise, and we evaluate the attack ability of the triggers they generate on different classification datasets, including CIFAR-10, GTSRB, CelebA, and ImageNet12. The experimental results show that our attack approach can bypass most defense approaches and even human inspection. We only need to poison 5%–10% of the training data, yet the attack success rate (ASR) can reach over 99%. To test the robustness of the backdoor model against corruption methods encountered in practice, we introduce 17 corruption methods and compute the accuracy and ASR of the backdoor model under them. The results show that our backdoor model is robust to most corruption methods, which means it can be applied in real-world settings. Our code is available at https://github.com/928082786/pnoiseattack.
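The poisoning pipeline the abstract describes (generate a smooth procedural-noise pattern, blend it into a small fraction of the training images, and relabel those samples with the attacker's target class) can be sketched briefly. The snippet below is a minimal, hypothetical illustration and not the authors' released code (see the GitHub link above): it uses a simple fractal value-noise generator as a stand-in for the procedural noise the paper employs, and the function names, the poison_rate and blend parameters, and the NHWC image layout are assumptions made for the sketch.

```python
import numpy as np

def value_noise(h, w, scale=8, octaves=3, seed=0):
    """Fractal value noise: a simple stand-in for procedural (e.g. Perlin-style) noise.
    Sums bilinearly upsampled random grids at increasing frequencies."""
    rng = np.random.default_rng(seed)
    out = np.zeros((h, w), dtype=np.float32)
    amp, total = 1.0, 0.0
    for o in range(octaves):
        gh = gw = scale * (2 ** o) + 1
        grid = rng.random((gh, gw)).astype(np.float32)
        # Bilinear upsampling of the coarse grid to (h, w).
        ys, xs = np.linspace(0, gh - 1, h), np.linspace(0, gw - 1, w)
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, gh - 1), np.minimum(x0 + 1, gw - 1)
        fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
        top = grid[y0][:, x0] * (1 - fx) + grid[y0][:, x1] * fx
        bot = grid[y1][:, x0] * (1 - fx) + grid[y1][:, x1] * fx
        out += amp * (top * (1 - fy) + bot * fy)
        total += amp
        amp *= 0.5
    out /= total
    return (out - out.min()) / (out.max() - out.min() + 1e-8)  # normalise to [0, 1]

def poison_dataset(images, labels, target_label, poison_rate=0.05, blend=0.1, seed=0):
    """Blend the noise trigger into a fraction of an NHWC uint8 training set
    and relabel the poisoned samples with the attacker's target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    trigger = value_noise(images.shape[1], images.shape[2], seed=seed)[..., None]
    poisoned = images.astype(np.float32)
    poisoned[idx] = np.clip((1 - blend) * poisoned[idx] + blend * 255.0 * trigger, 0, 255)
    new_labels = np.asarray(labels).copy()
    new_labels[idx] = target_label
    return poisoned.astype(np.uint8), new_labels, idx
```

In this sketch, a low blend ratio keeps the global trigger visually inconspicuous while a 5%–10% poison_rate mirrors the poisoning budget reported in the abstract; the same fixed noise pattern would then be blended into test images to activate the backdoor.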

Keywords