IEEE Access (Jan 2024)

Indoor Scene Reconstruction From Monocular Video Combining Contextual and Geometric Priors

  • Mingyun Wen
  • Xuanyu Sheng
  • Kyungeun Cho

DOI: https://doi.org/10.1109/ACCESS.2024.3481250
Journal volume & issue: Vol. 12, pp. 153360–153369

Abstract

Recent advances in deep-learning-based three-dimensional (3D) indoor scene reconstruction from monocular videos have attracted considerable attention. However, existing methods remain inferior to reconstructions built from data acquired with 3D sensors, primarily because video data lack explicit depth information. Depth inference from monocular video relies on visual cues, such as texture, which can become ambiguous owing to lighting, reflections, and material properties. Most existing methods use convolutional neural networks (CNNs) for feature extraction and integrate features from multiple viewpoints to generate 3D features. However, CNNs cannot capture effective features in areas with unclear visual cues because of the limited receptive fields of their shallow layers. To overcome these issues, this study proposes a keyframe feature-generation module that employs a pretrained vision transformer (ViT), capitalizing on its global perception to infer and synthesize features in areas with ambiguous visual cues. In addition, we employ a pretrained multi-view stereo network to generate a cost volume as a geometric feature, which is further enhanced with features extracted by the ViT. Experiments on real-world datasets demonstrate the effectiveness of the proposed approach in comparison with existing methods.
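
The abstract describes the architecture only at a high level; as a rough, hypothetical sketch (not the authors' implementation), the following PyTorch module illustrates one way ViT patch tokens could be fused with an MVS cost volume. All names, shapes, and channel widths here are assumptions for illustration.

import torch
import torch.nn as nn

class KeyframeFeatureFusion(nn.Module):
    # Hypothetical sketch: fuse global ViT context with a geometric cost volume.
    def __init__(self, vit_dim=768, cost_channels=64, out_channels=64):
        super().__init__()
        # 1x1 projection brings ViT tokens to the cost-volume channel width.
        self.proj = nn.Conv2d(vit_dim, cost_channels, kernel_size=1)
        # Small fusion head over concatenated contextual + geometric features.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * cost_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, vit_tokens, cost_volume):
        # vit_tokens: (B, N, C) patch tokens from a frozen, pretrained ViT.
        # cost_volume: (B, cost_channels, H, W) from a pretrained MVS network.
        b, n, c = vit_tokens.shape
        h = w = int(n ** 0.5)  # assumes a square patch grid (e.g., 14x14)
        ctx = vit_tokens.transpose(1, 2).reshape(b, c, h, w)
        ctx = self.proj(ctx)
        # Upsample contextual features to the cost-volume resolution before fusing.
        ctx = nn.functional.interpolate(
            ctx, size=cost_volume.shape[-2:], mode="bilinear", align_corners=False
        )
        return self.fuse(torch.cat([ctx, cost_volume], dim=1))

For example, tokens of shape (1, 196, 768) from a ViT-Base and a (1, 64, 56, 56) cost volume would yield a (1, 64, 56, 56) enhanced geometric feature map; the fusion actually used in the paper may differ.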

Keywords