IEEE Access (Jan 2020)

MVCLN: Multi-View Convolutional LSTM Network for Cross-Media 3D Shape Recognition

  • Qi Liang,
  • Yixin Wang,
  • Weizhi Nie,
  • Qiang Li

DOI
https://doi.org/10.1109/ACCESS.2020.3012692
Journal volume & issue
Vol. 8
pp. 139792 – 139802

Abstract

Cross-media 3D model recognition is an important and challenging task in computer vision, with applications such as landmark detection and image set classification. In recent years, with the development of deep learning, many approaches have been proposed to handle the 3D model recognition problem. However, these methods focus on structure information representation and multi-view information fusion while ignoring spatial and temporal information, which makes them ill-suited to cross-media 3D model recognition. In this paper, we represent each 3D model by a sequence of views and propose a novel Multi-View Convolutional LSTM Network (MVCLN), which utilizes an LSTM structure to extract temporal information and convolutional operations to extract spatial information. More specifically, spatial and temporal information are both considered during training, which effectively exploits the differences between the views' spatial information to improve the final performance. Meanwhile, we introduce a classic attention model to define the weight of each view, which reduces redundant spatial information in the fusion step. We evaluate the proposed method on ModelNet40 for the 3D model classification and retrieval tasks. We also construct a dataset from the overlapping categories of MV-RED, ShapeNetCore, and ModelNet to demonstrate the effectiveness of our approach for cross-media 3D model recognition. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance.
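The attention-based fusion step described in the abstract, in which each rendered view receives a learned weight so that redundant views contribute less to the final descriptor, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `attention_fuse`, the feature dimensions, and the scoring vector `w` (a stand-in for the learned attention parameters) are all illustrative assumptions.

```python
import numpy as np

def attention_fuse(view_feats, w):
    """Fuse per-view features with softmax attention weights.

    view_feats: (num_views, feat_dim) array of per-view descriptors,
                e.g. the per-view outputs of a ConvLSTM backbone
    w:          (feat_dim,) scoring vector (illustrative stand-in for
                the learned attention parameters)
    Returns the attention-weighted sum of the view features, (feat_dim,).
    """
    scores = view_feats @ w                        # one scalar score per view
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over views
    return alpha @ view_feats                      # weighted sum of views

# Toy example: 12 rendered views, each described by a 64-dim feature vector.
rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 64))
w = rng.normal(size=64)
fused = attention_fuse(feats, w)
```

Note that with a zero scoring vector the softmax weights are uniform, so the fusion degenerates to plain view averaging; the learned weights are what let the model down-weight redundant views.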

Keywords