IEEE Access (Jan 2021)

Multi-Head Self-Attention for 3D Point Cloud Classification

  • Xue-Yao Gao
  • Yan-Zhao Wang
  • Chun-Xiang Zhang
  • Jia-Qi Lu

DOI
https://doi.org/10.1109/ACCESS.2021.3050488
Journal volume & issue
Vol. 9
pp. 18137–18147

Abstract

3D point cloud classification has been a hot research topic in recent years. Unlike regular data such as images and text, a point cloud is an unordered set, which makes two-dimensional (2D) convolutional neural networks (CNNs) difficult to apply directly. When features are extracted from the input data, it is important to capture both global and local information effectively. In this paper, we propose a 3D model classification method based on a multi-head self-attention mechanism that consumes sparse point clouds and learns a robust latent representation of the 3D point cloud. The framework is composed of self-attention layers, multilayer perceptrons (MLPs), a fully connected (FC) layer, a max-pooling layer, and a softmax layer. The feature vector of each point includes spatial coordinates and shape descriptors, which are encoded by the self-attention layers to extract the relationships among points. The outputs of the attention heads are concatenated and fed into MLPs to extract features. After the MLPs transform them into the expected dimension, a max-pooling layer aggregates them into high-level features, which are then passed to the fully connected layer. The softmax layer determines the category of the 3D model. The proposed method is evaluated on ModelNet40. Experimental results show that it is robust to rotation variance, position variance, and point sparsity.
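The abstract's pipeline (per-point features -> multi-head self-attention -> shared MLPs -> max-pooling -> FC -> softmax) can be illustrated with a minimal PyTorch sketch. The layer widths, the number of heads, the 6-dimensional per-point input (xyz plus a 3-D shape descriptor), and the use of nn.MultiheadAttention (which concatenates head outputs internally) are assumptions for illustration, not the paper's exact configuration.

    # Minimal sketch, assuming PyTorch and hypothetical layer sizes.
    import torch
    import torch.nn as nn

    class PointCloudClassifier(nn.Module):
        def __init__(self, in_dim=6, embed_dim=64, num_heads=4, num_classes=40):
            super().__init__()
            # Per-point embedding of spatial coordinates + shape descriptors.
            self.embed = nn.Linear(in_dim, embed_dim)
            # Multi-head self-attention over the unordered point set
            # (batch_first=True expects input of shape [B, N, embed_dim]).
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            # Shared MLPs that lift per-point features to a higher dimension.
            self.mlp = nn.Sequential(
                nn.Linear(embed_dim, 128), nn.ReLU(),
                nn.Linear(128, 1024), nn.ReLU(),
            )
            # Fully connected classification head; softmax gives class probabilities.
            self.fc = nn.Sequential(
                nn.Linear(1024, 512), nn.ReLU(),
                nn.Linear(512, num_classes),
            )

        def forward(self, points):          # points: [B, N, in_dim]
            x = self.embed(points)          # [B, N, embed_dim]
            attn_out, _ = self.attn(x, x, x)
            x = self.mlp(attn_out)          # [B, N, 1024]
            x = x.max(dim=1).values         # max-pooling over points -> [B, 1024]
            return self.fc(x).softmax(dim=-1)

    # Usage: 8 clouds of 1024 points, 40 ModelNet40 categories.
    probs = PointCloudClassifier()(torch.rand(8, 1024, 6))
    print(probs.shape)  # torch.Size([8, 40])

Because max-pooling over the point dimension is permutation-invariant and self-attention treats the points as a set, this kind of architecture does not depend on any particular ordering of the input points, which is the property the abstract highlights for handling the disorder of point clouds.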

Keywords