IEEE Access (Jan 2021)
Cascade Convolution Neural Network for Point Set Generation
Abstract
Automatic and efficient 3D object modeling has become critical in industrial applications. The advancement of deep convolutional neural networks (CNNs) has prompted researchers to use CNNs to learn 3D geometric information directly from images. However, the feature maps extracted by CNNs are better suited to image-processing tasks because they mainly encode deep texture information of the entire 2D image. In contrast, 3D reconstruction with CNNs demands geometric information about a specific object. Existing architectures mainly try to infer geometric structure from texture information, which leads to an uneven distribution of points in the generated point cloud. To address this problem, we propose a cascade point set generation network (CPSGN) that deforms the predicted object while more effectively inferring the object's 3D geometry from the 2D image, so that the point distribution of the final object becomes more uniform and denser. The CPSGN consists of a point set generation part that produces a basic 3D object and a point deformation part that fine-tunes this basic object. In addition, we design a projection loss that optimizes the geometry of the model by measuring shape differences from multiple viewpoints. Experimental results on different benchmark datasets indicate that the generated point-based model outperforms existing approaches.
Keywords