PLoS ONE (Jan 2023)

LPF-Defense: 3D adversarial defense based on frequency analysis.

  • Hanieh Naderi,
  • Kimia Noorbakhsh,
  • Arian Etemadi,
  • Shohreh Kasaei

DOI
https://doi.org/10.1371/journal.pone.0271388
Journal volume & issue
Vol. 18, no. 2
p. e0271388

Abstract


3D point clouds are increasingly used in various applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled into misclassification by 3D adversarial attacks intentionally designed to perturb a point cloud's features. These misclassifications may be due to the network's overreliance on features with unnecessary information in training sets. As such, identifying the features used by deep classifiers and removing features with unnecessary information from the training data can improve the network's robustness against adversarial attacks. In this paper, the LPF-Defense framework is proposed, which discards this unnecessary information from the training data by suppressing the high-frequency content in the training phase. Our analysis shows that adversarial perturbations are concentrated in the high-frequency content of adversarial point clouds. Experiments show that the proposed defense method achieves state-of-the-art defense performance against six adversarial attacks on the PointNet, PointNet++, and DGCNN models. The findings are supported by an extensive evaluation on synthetic (ModelNet40 and ShapeNet) and real (ScanObjectNN) datasets. In particular, classification accuracy improves by an average of 3.8% on the Drop100 attack and 4.26% on the Drop200 attack compared with state-of-the-art methods. The method also improves the models' accuracy on the original dataset compared to other available methods. (To facilitate research in this area, an open-source implementation of the method and data is released at https://github.com/kimianoorbakhsh/LPF-Defense.)
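The core idea of suppressing high-frequency content in a point cloud can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes the low-pass step is approximated by projecting each point's radius onto real spherical harmonics up to a cutoff degree via least squares, and the function name `low_pass_filter` and parameter `l_max` are illustrative.

```python
import numpy as np
from scipy.special import sph_harm

def low_pass_filter(points, l_max=4):
    """Suppress high-frequency content of an (N, 3) point cloud by fitting
    its radial component with real spherical harmonics up to degree l_max."""
    r = np.linalg.norm(points, axis=1)
    r_safe = np.maximum(r, 1e-12)  # avoid division by zero at the origin
    pol = np.arccos(np.clip(points[:, 2] / r_safe, -1.0, 1.0))  # polar angle
    az = np.arctan2(points[:, 1], points[:, 0])                 # azimuthal angle

    # Real spherical-harmonic basis evaluated at each point's angles.
    # SciPy convention: sph_harm(m, l, azimuthal, polar).
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, az, pol)
            if m > 0:
                cols.append(np.sqrt(2) * y.real)
            elif m < 0:
                cols.append(np.sqrt(2) * y.imag)
            else:
                cols.append(y.real)
    basis = np.stack(cols, axis=1)

    # Least-squares fit of the radii keeps only low-frequency structure;
    # coefficients beyond degree l_max are implicitly discarded.
    coeffs, *_ = np.linalg.lstsq(basis, r, rcond=None)
    r_lp = basis @ coeffs

    # Rebuild Cartesian coordinates with the filtered radii.
    directions = points / r_safe[:, None]
    return directions * r_lp[:, None]
```

In the framework described by the abstract, such a filter would be applied to the training point clouds so the classifier never learns to rely on high-frequency detail, which is where the adversarial perturbations are found to reside.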