IEEE Access (Jan 2023)

To Drop or to Select: Reduce the Negative Effects of Disturbance Features for Point Cloud Classification From an Interpretable Perspective

  • Ao Liang,
  • Hao Zhang,
  • Haiyang Hua,
  • Wenyu Chen

DOI
https://doi.org/10.1109/ACCESS.2023.3266340
Journal volume & issue
Vol. 11
pp. 36184 – 36202

Abstract


Perturbation features limit the performance of point cloud classification models, both on clean point clouds and on those acquired from the real world. In this paper, we propose two methods that improve models by reducing the negative impact of nuisance features from an interpretability perspective: dropping nuisance points before the point cloud is fed into the model, and adaptively selecting important features during training. The former is achieved through saliency analysis of the model; the perturbation points are those that contribute little to the model's prediction. For each sample, dropping the low-contribution points according to their saliency scores is equivalent to filtering out the perturbation features. We design a generic framework for generating saliency maps across models and datasets, obtain empirical values for the number of dropped points on each model-dataset pair, and then apply this unsupervised dropping process to improve model robustness. The latter is achieved by adaptive downsampling: we design a multi-stage, learnable, class-attention-based downsampling module to replace the commonly used Farthest Point Sampling (FPS). As training progresses, the downsampling module tends to select the features common to each category, eliminating nuisance features and improving the model's learning efficiency. For dropping points (DP), we generate saliency maps for PointNet, PointNet++, DGCNN, and PointMLP on ModelNet40 and ScanObjectNN. PointNet+DP reaches an overall accuracy (OA) of 92.5% on ModelNet40 and 72% on ScanObjectNN, surpassing the original model by 3.4% and 5.3%, respectively, and the OA of PointNet++ with DP on ModelNet40 object classification rises from 91.8% to 93.7%. For adaptive feature selection (AFS), PointMLP-elite+AFS reaches an OA of 92.5% and a mean accuracy (mAcc) of 72% on ScanObjectNN, surpassing the original model by 0.8% and 1%, and matches the performance of PointMLP with only 6.3% of its parameters. Given the difficulty of deploying deep models, PointMLP-elite+AFS is, to our knowledge, the most cost-effective classification model on ScanObjectNN.
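
To make the point-dropping idea concrete, the minimal PyTorch sketch below scores each point by the gradient norm of the classification loss with respect to its coordinates and removes the lowest-scoring points. The function name, the (B, N, 3) input convention, and the gradient-norm saliency definition are illustrative assumptions; the paper's actual saliency framework and its per-dataset drop counts are not reproduced here.

```python
import torch


def drop_low_saliency_points(model, points, labels, num_drop):
    """Drop the num_drop points with the lowest gradient-based saliency.

    points: (B, N, 3) float tensor, labels: (B,) long tensor of class ids.
    A point's saliency is taken as the L2 norm of the gradient of the
    classification loss with respect to that point's coordinates
    (one common proxy; the paper's definition may differ).
    """
    points = points.clone().detach().requires_grad_(True)
    logits = model(points)                      # assumes the model maps (B, N, 3) -> (B, C) logits
    loss = torch.nn.functional.cross_entropy(logits, labels)
    loss.backward()

    saliency = points.grad.norm(dim=-1)         # (B, N) per-point contribution score
    num_keep = points.shape[1] - num_drop
    keep_idx = saliency.topk(num_keep, dim=1).indices
    kept = torch.gather(points.detach(), 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, points.shape[-1]))
    return kept                                 # (B, N - num_drop, 3) filtered point cloud
```

In such a pipeline, the filtered point clouds would then be passed to the classifier at test time or during retraining, with num_drop tuned per model and dataset, analogous to the empirical drop counts described in the abstract.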

Keywords