High-Confidence Computing (Mar 2024)

Adversarial robustness analysis of LiDAR-included models in autonomous driving

Bo Yang, Zizhi Jin, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu

Journal volume & issue: Vol. 4, no. 1, p. 100203

Abstract

In autonomous driving systems, perception is pivotal, relying chiefly on sensors such as LiDAR and cameras for environmental awareness. LiDAR, valued for its detailed depth perception, is increasingly integrated into autonomous vehicles. In this article, we analyze the robustness of four LiDAR-included models against adversarial points under physical constraints. We first introduce an attack technique that, by adding a limited number of physically constrained adversarial points above a vehicle, can render the vehicle undetectable by the LiDAR-included models. Experiments reveal that adversarial points degrade the detection capabilities of both LiDAR-only and LiDAR–camera fusion models, and that adding more adversarial points tends to raise the attack success rate. Notably, voxel-based models are more susceptible to deception by these adversarial points. We also investigate how the distance and angle of the added adversarial points affect the attack success rate: in general, the farther away the victim object to be hidden and the closer the points are placed to the front of the LiDAR, the higher the attack success rate. Additionally, we experimentally demonstrate that the generated adversarial points transfer well across models, and we validate the effectiveness of the proposed optimization method through ablation studies. Finally, we propose a new plug-and-play, model-agnostic defense method based on the concept of point smoothness. The ROC curve of this defense shows an AUC of approximately 0.909, demonstrating its effectiveness.
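The abstract does not spell out the point-smoothness statistic itself. As a rough illustration only, the Python sketch below scores each LiDAR point by its mean distance to its k nearest neighbors, on the assumption that injected points floating above a vehicle sit "off-surface" and therefore score higher than points lying on smooth object surfaces; separability is then summarized with an ROC AUC, mirroring the evaluation metric reported above. The function names, parameters, and toy data here are hypothetical stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of a point-smoothness style detector (assumption:
# the paper's actual smoothness measure is not given in the abstract;
# a k-nearest-neighbor distance statistic is used here as a stand-in).
import numpy as np
from scipy.spatial import cKDTree
from sklearn.metrics import roc_auc_score

def smoothness_scores(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Score each point by the mean distance to its k nearest neighbors.

    Points on dense, smooth surfaces get low scores; sparse injected
    points floating above the scene tend to get high scores.
    """
    tree = cKDTree(points)
    # Query k+1 neighbors: the nearest neighbor of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    return dists[:, 1:].mean(axis=1)

# Toy evaluation: a dense roof-like patch plus a few floating injected points.
rng = np.random.default_rng(0)
benign = np.column_stack([rng.uniform(0, 2, 500),
                          rng.uniform(0, 2, 500),
                          rng.normal(1.5, 0.01, 500)])      # dense surface
injected = rng.uniform([0, 0, 2.5], [2, 2, 3.5], (20, 3))   # sparse, floating
cloud = np.vstack([benign, injected])
labels = np.r_[np.zeros(len(benign)), np.ones(len(injected))]

scores = smoothness_scores(cloud)
print("AUC:", roc_auc_score(labels, scores))  # high AUC => separable
```

A kNN-distance score is only one plausible reading of "point smoothness"; the paper's method may use a different local statistic. The plug-and-play, model-agnostic character carries over either way: the detector operates directly on the raw point cloud before any detection model consumes it.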

Keywords