IET Computer Vision (Apr 2021)

Point cloud classification by dynamic graph CNN with adaptive feature fusion

  • Rui Guo,
  • Yong Zhou,
  • Jiaqi Zhao,
  • Yiyun Man,
  • Minjie Liu,
  • Rui Yao,
  • Bing Liu

DOI
https://doi.org/10.1049/cvi2.12039
Journal volume & issue
Vol. 15, no. 3
pp. 235 – 244

Abstract

Deep neural networks have achieved state-of-the-art results on almost all 2D image tasks, which motivates applying deep learning to 3D data. Point cloud data, as the most basic and important representation of 3D scenes, can describe the real world accurately and intuitively. The authors propose a new network based on feature fusion to improve point cloud classification and segmentation. The network consists of three parts: a global feature extractor, a local feature extractor and an adaptive feature fusion module. A multi-scale transformation network is devised to guarantee transformation invariance of the global features, and a residual block is introduced to alleviate vanishing gradients and strengthen the global feature extractor. A local feature extractor is built on edge convolution and a multi-layer perceptron. Finally, an adaptive feature fusion module is proposed to fuse the global and local features. Extensive experiments on point cloud classification and segmentation verify the effectiveness of the proposed method. Classification accuracy on ModelNet40 is 93.6%, which is 4.4% higher than that of PointNet. Similarly, segmentation accuracy on ShapeNet is 85.6%, which is higher than that of competing methods.
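As a rough illustration of the fusion idea described in the abstract (not the authors' actual implementation, which is not given here), the following PyTorch sketch shows one common way to adaptively blend a per-cloud global descriptor with per-point local features. All layer sizes, the gating scheme and the module name are assumptions made for illustration only.

```python
# Hypothetical sketch of an adaptive feature-fusion module (assumed design,
# not the paper's code). It learns per-channel weights that blend a global
# descriptor with per-point local features before the task-specific head.
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    def __init__(self, local_dim=64, global_dim=1024, out_dim=256):
        super().__init__()
        # Project both feature streams to a common dimensionality.
        self.local_proj = nn.Linear(local_dim, out_dim)
        self.global_proj = nn.Linear(global_dim, out_dim)
        # Small gating MLP that predicts a fusion weight per channel.
        self.gate = nn.Sequential(
            nn.Linear(2 * out_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        # local_feat:  (B, N, local_dim)  per-point local features
        # global_feat: (B, global_dim)    one global descriptor per cloud
        B, N, _ = local_feat.shape
        l = self.local_proj(local_feat)                 # (B, N, out_dim)
        g = self.global_proj(global_feat).unsqueeze(1)  # (B, 1, out_dim)
        g = g.expand(-1, N, -1)                         # broadcast to each point
        w = self.gate(torch.cat([l, g], dim=-1))        # (B, N, out_dim) in [0, 1]
        return w * l + (1.0 - w) * g                    # adaptive blend


if __name__ == "__main__":
    fuse = AdaptiveFeatureFusion()
    fused = fuse(torch.randn(2, 1024, 64), torch.randn(2, 1024))
    print(fused.shape)  # torch.Size([2, 1024, 256])
```

The sigmoid gate lets the network decide, per point and per channel, how much of the global context versus the local edge-based features to keep; other fusion rules (concatenation, attention) would fit the same interface.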

Keywords