Jisuanji kexue yu tansuo (May 2022)

Dense Point Cloud Reconstruction by Shape and Pose Features Learning

  • YANG Yongzhao, ZHANG Yujin, ZHANG Lijun

DOI: https://doi.org/10.3778/j.issn.1673-9418.2010008
Journal volume & issue: Vol. 16, No. 5, pp. 1117–1127

Abstract


Generating a dense 3D point cloud from a single image is one approach to high-resolution 3D reconstruction and has long attracted strong interest in computer vision. Most existing methods focus on a single type of target feature and require large amounts of training data. To address this, a multi-stage dense point cloud reconstruction network based on feature diversity is proposed, composed of a first-stage 3D reconstruction network and a second-stage point cloud processing network. The first-stage network reconstructs a sparse point cloud from a single image by fusing the 2D shape features of the target with 3D point cloud pose features. The second-stage network extracts global and local features from the sparse point cloud and fuses them to increase point density, yielding a high-resolution dense point cloud. Deep-learning fine-tuning is used to combine the two networks into an end-to-end model that generates dense point clouds directly from a single image. The method is analyzed quantitatively and qualitatively through extensive experiments on synthetic and real-world datasets. The results show an average Chamfer Distance (CD) of 0.00698 and an Earth Mover's Distance (EMD) of 2823.53, outperforming several existing methods and producing better point cloud reconstructions.
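The abstract outlines a two-stage pipeline (sparse reconstruction from a single image via fused shape and pose features, then densification via global/local feature fusion) and reports CD as one of its metrics. The sketch below is a minimal PyTorch illustration of that idea only; the module structure, layer sizes, point counts (1024 sparse, 4× upsampling), and the plain MLP decoders are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of the two-stage idea in the abstract. All hyperparameters
# and module choices here are illustrative assumptions.
import torch
import torch.nn as nn

class SparseReconstructionNet(nn.Module):
    """Stage 1: predict a sparse point cloud from a single image by fusing
    2D shape features with learned 3D pose features."""
    def __init__(self, n_sparse=1024):
        super().__init__()
        self.image_encoder = nn.Sequential(             # 2D shape features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_head = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # pose features
        self.decoder = nn.Linear(64 + 128, n_sparse * 3)               # fused -> points
        self.n_sparse = n_sparse

    def forward(self, img):
        shape_feat = self.image_encoder(img)
        pose_feat = self.pose_head(shape_feat)
        fused = torch.cat([shape_feat, pose_feat], dim=1)
        return self.decoder(fused).view(-1, self.n_sparse, 3)

class DensificationNet(nn.Module):
    """Stage 2: upsample the sparse cloud by fusing a global feature
    with per-point (local) features."""
    def __init__(self, ratio=4):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.expand = nn.Linear(128 + 128, ratio * 3)    # global+local -> offsets
        self.ratio = ratio

    def forward(self, sparse):
        local = self.local(sparse)                       # (B, N, 128)
        global_feat = local.max(dim=1, keepdim=True).values.expand_as(local)
        offsets = self.expand(torch.cat([local, global_feat], dim=-1))
        dense = sparse.unsqueeze(2) + offsets.view(*sparse.shape[:2], self.ratio, 3)
        return dense.reshape(sparse.size(0), -1, 3)      # (B, N*ratio, 3)

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance (CD), one of the reported metrics."""
    d = torch.cdist(p, q)                                # (B, Np, Nq) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# End-to-end usage: image -> sparse cloud -> dense cloud.
img = torch.randn(2, 3, 128, 128)
sparse = SparseReconstructionNet()(img)                  # (2, 1024, 3)
dense = DensificationNet()(sparse)                       # (2, 4096, 3)
print(chamfer_distance(dense, torch.randn(2, 4096, 3)).item())
```

In an actual setup, the two stages would be pretrained separately and then fine-tuned jointly, mirroring the end-to-end combination described in the abstract.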

Keywords