Virtual Reality & Intelligent Hardware (Oct 2022)

RADepthNet: Reflectance-Aware Monocular Depth Estimation

  • Chuxuan Li,
  • Ran Yi,
  • Saba Ghazanfar Ali,
  • Lizhuang Ma,
  • Enhua Wu,
  • Jihong Wang,
  • Lijuan Mao,
  • Bin Sheng

Journal volume & issue
Vol. 4, no. 5
pp. 418–431

Abstract

Background: Monocular depth estimation aims to predict a dense depth map from a single RGB image, with important applications in 3D reconstruction, autonomous driving, and augmented reality. However, existing methods feed the original RGB image directly into the model to extract depth features, so depth-irrelevant information interferes with the estimation and degrades accuracy. To remove the influence of depth-irrelevant information and improve depth prediction accuracy, we propose RADepthNet, a novel reflectance-guided network that fuses boundary features. Specifically, our method predicts depth maps in three steps: 1) Intrinsic image decomposition. We propose a reflectance extraction module, built on an encoder-decoder structure, to extract the depth-related reflectance; an ablation study demonstrates that this module reduces the influence of illumination on depth estimation. 2) Boundary detection. A boundary extraction module, consisting of an encoder, a refinement block, and an upsampling block, is proposed to better predict depth at object boundaries by exploiting gradient constraints. 3) Depth prediction. A depth prediction module uses an encoder separate from that of step 2) to obtain depth features from the reflectance map and fuses them with the boundary features to predict depth. In addition, we propose FIFADataset, a depth estimation dataset for soccer scenarios. Extensive experiments on a public dataset and our proposed FIFADataset show that our method achieves state-of-the-art performance.
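
The abstract describes a three-module pipeline: reflectance extraction, boundary extraction, and depth prediction with feature fusion. The minimal PyTorch sketch below only illustrates how such a pipeline could be wired together; the class names (ReflectanceExtractor, BoundaryExtractor, DepthPredictor, RADepthNetSketch), layer counts, and channel widths are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class ReflectanceExtractor(nn.Module):
    """Step 1 (assumed layout): encoder-decoder mapping an RGB image to a reflectance map."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))  # reflectance map at input resolution


class BoundaryExtractor(nn.Module):
    """Step 2 (assumed layout): encoder + refinement block + upsampling block for boundary features."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.refine = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.upsample = nn.Sequential(nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, rgb):
        return self.upsample(self.refine(self.encoder(rgb)))  # boundary feature map


class DepthPredictor(nn.Module):
    """Step 3 (assumed layout): a separate encoder on the reflectance map, fused with boundary features."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Sequential(
            nn.Conv2d(feat * 2, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, reflectance, boundary_feat):
        fused = torch.cat([self.encoder(reflectance), boundary_feat], dim=1)  # channel-wise fusion
        return self.head(fused)  # dense depth map


class RADepthNetSketch(nn.Module):
    """End-to-end sketch: RGB -> reflectance -> depth, with boundary features fused in."""
    def __init__(self):
        super().__init__()
        self.reflectance = ReflectanceExtractor()
        self.boundary = BoundaryExtractor()
        self.depth = DepthPredictor()

    def forward(self, rgb):
        r = self.reflectance(rgb)
        b = self.boundary(rgb)
        return self.depth(r, b)


if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 320)
    print(RADepthNetSketch()(x).shape)  # torch.Size([1, 1, 256, 320])
```

The key design point carried over from the abstract is that depth features are extracted from the predicted reflectance map rather than the raw RGB image, with boundary features from a separate branch fused before the final depth head; loss terms (e.g., the gradient constraints mentioned for the boundary branch) are omitted here.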

Keywords