IEEE Access (Jan 2020)
Cross-Domain Correspondence for Sketch-Based 3D Model Retrieval Using Convolutional Neural Network and Manifold Ranking
Abstract
Due to the large difference in representation between sketches and 3D models, sketch-based 3D model retrieval is a challenging problem in graphics and computer vision. Some state-of-the-art approaches extract features from 2D sketches, render multiple projection views of each 3D model, and then select one view to match the sketch. However, it is hard to determine "the best view," and views of a 3D model from different perspectives can be completely different. Other methods apply learned features to retrieve 3D models from a 2D sketch. However, sketches are abstract images and are usually drawn subjectively, so their features are difficult to learn accurately. To address these problems, we propose a cross-domain correspondence method for sketch-based 3D model retrieval based on manifold ranking. Specifically, we first extract learned features of sketches and 3D models with a two-part CNN structure. Subsequently, we build cross-domain undirected graphs from the learned features and semantic labels to establish correspondence between sketches and 3D models. Finally, the retrieval results are computed by manifold ranking. Experimental results on the SHREC 13 and SHREC 14 datasets show superior performance on all seven standard metrics compared to state-of-the-art approaches.
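As a rough illustration of the final ranking step, the sketch below implements the classical manifold-ranking formulation of Zhou et al. (closed form f* = (I − αS)⁻¹y with S the symmetrically normalized affinity matrix) on a toy cross-domain graph. The affinity matrix, node layout, and α value here are illustrative assumptions, not the paper's actual graph construction or parameters.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.85):
    """Rank graph nodes by propagating query scores over affinities.

    W     : (n, n) symmetric non-negative affinity matrix, zero diagonal
    y     : (n,) initial score vector (1 at query nodes, 0 elsewhere)
    alpha : propagation weight in (0, 1); illustrative choice here

    Returns the converged scores f* = (I - alpha*S)^-1 y, where
    S = D^{-1/2} W D^{-1/2} is the normalized affinity matrix.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard isolated nodes
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Hypothetical graph: node 0 is the sketch query; nodes 1-3 are 3D models.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])   # query indicator vector
scores = manifold_ranking(W, y)
# Models are returned in descending score order (excluding the query node);
# here node 3, reachable only through node 2, ranks last.
```

In a retrieval setting, each query sketch yields one such y vector, and the models are sorted by their converged scores.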
Keywords