IET Computer Vision (Aug 2024)

Adversarial catoptric light: An effective, stealthy and robust physical‐world attack to DNNs

  • Chengyin Hu,
  • Weiwen Shi,
  • Ling Tian,
  • Wen Li

DOI
https://doi.org/10.1049/cvi2.12264
Journal volume & issue
Vol. 18, no. 5
pp. 557–573

Abstract

Recent studies have demonstrated that finely tuned deep neural networks (DNNs) are susceptible to adversarial attacks. Conventional physical attacks employ stickers as perturbations, achieving robust adversarial effects but compromising stealthiness. Recent innovations instead use light beams, such as lasers and projectors, to generate perturbations, allowing stealthy physical attacks at the expense of robustness. In pursuit of physical attacks that are both stealthy and robust, the authors present adversarial catoptric light (AdvCL). The method leverages the natural phenomenon of catoptric light (reflected light) to generate perturbations that appear natural and remain stealthy. AdvCL first formalises the physical parameters of catoptric light, then optimises these parameters with a genetic algorithm to derive the most adversarial perturbation, and finally deploys the perturbation in the physical scene to execute a stealthy and robust attack. The proposed method is evaluated along three dimensions: effectiveness, stealthiness, and robustness. Quantitative results in simulated environments demonstrate its efficacy, achieving an attack success rate of 83.5% and surpassing the baseline. Using common catoptric light as the perturbation enhances the method's stealthiness, rendering physical samples more natural in appearance. Robustness is affirmed by successfully attacking advanced DNNs with a success rate exceeding 80% in all cases. Additionally, the authors discuss defence strategies against AdvCL and introduce related light-based physical attacks.
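
The pipeline the abstract describes (formalise the physical parameters of catoptric light, search them with a gradient-free genetic algorithm, deploy the best candidate) can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction, not the authors' implementation: the parameter set (spot position, radius, RGB intensity), the renderer render_catoptric_light, the ResNet-50 target, and the GA hyper-parameters are all illustrative assumptions.

    import random

    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Target classifier: a pretrained ResNet-50 stands in for "advanced DNNs".
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

    # Assumed parameterisation of one catoptric-light spot:
    # (x, y) centre, radius, and RGB intensity, on a 224x224 RGB image.
    BOUNDS = [(0, 223), (0, 223), (10, 80), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]

    def render_catoptric_light(img, p):
        """Blend a soft circular light spot into img (HxWx3 floats in [0, 1])."""
        x, y, rad, r, g, b = p
        h, w, _ = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.clip(1.0 - np.sqrt((xx - x) ** 2 + (yy - y) ** 2) / rad, 0.0, 1.0)
        spot = np.stack([mask * r, mask * g, mask * b], axis=-1)
        return np.clip(img + spot, 0.0, 1.0)

    def fitness(img, p, true_label):
        """Fitness = 1 - P(true class): lower confidence, fitter individual."""
        adv = torch.from_numpy(render_catoptric_light(img, p)).permute(2, 0, 1).float()
        with torch.no_grad():
            prob = model(normalize(adv).unsqueeze(0)).softmax(1)[0, true_label]
        return 1.0 - prob.item()

    def mutate(p):
        q = list(p)
        i = random.randrange(len(q))           # resample one gene uniformly
        q[i] = random.uniform(*BOUNDS[i])
        return q

    def crossover(a, b):
        cut = random.randrange(1, len(a))      # single-point crossover
        return a[:cut] + b[cut:]

    def attack(img, true_label, pop_size=20, generations=50):
        pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: fitness(img, p, true_label), reverse=True)
            elite = pop[: pop_size // 2]       # truncation selection keeps the top half
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(pop_size - len(elite))]
        return max(pop, key=lambda p: fitness(img, p, true_label))

    # Usage (assumes an RGB input image):
    # img = np.asarray(Image.open("photo.jpg").resize((224, 224))) / 255.0
    # best = attack(img, true_label=285)  # 285 = "Egyptian cat" in ImageNet

The sketch only conveys the black-box search structure; in the paper the catoptric-light parameterisation and fitness design are richer, and the winning parameters are physically realised with a reflector in the scene rather than rendered digitally.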

Keywords