ISPRS International Journal of Geo-Information (Jun 2024)

Learning Effective Geometry Representation from Videos for Self-Supervised Monocular Depth Estimation

  • Hailiang Zhao,
  • Yongyi Kong,
  • Chonghao Zhang,
  • Haoji Zhang,
  • Jiansen Zhao

DOI
https://doi.org/10.3390/ijgi13060193
Journal volume & issue
Vol. 13, no. 6
p. 193

Abstract

Recent studies on self-supervised monocular depth estimation have achieved promising results, relying mainly on the joint optimization of depth and pose estimation via a high-level photometric loss. However, how to learn latent, task-specific geometry representations from videos remains largely unexplored. To tackle this issue, we propose two novel schemes for learning more effective representations from monocular videos: (i) an Inter-task Attention Model (IAM) that learns geometric correlation representations between the depth and pose networks, making structure and motion information mutually beneficial; and (ii) a Spatial-Temporal Memory Module (STMM) that exploits long-range geometric context among consecutive frames, both spatially and temporally. Systematic ablation studies demonstrate the effectiveness of each component. Evaluations on the KITTI dataset show that our method outperforms current state-of-the-art techniques.
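
To make the training signal referred to above concrete, the following is a minimal sketch (in Python/PyTorch; the function name and arguments are illustrative assumptions, not code from the paper) of the standard photometric reprojection loss used in this line of self-supervised work: an SSIM plus L1 error between the target frame and a source frame warped into the target view. It illustrates the generic joint depth-pose framework only, not the paper's IAM or STMM modules.

```python
# Hedged sketch of the standard photometric loss for self-supervised
# monocular depth estimation (SSIM + L1 mix), not the authors' exact code.
import torch
import torch.nn.functional as F


def photometric_loss(target, warped, alpha=0.85):
    """Per-pixel photometric error: alpha * DSSIM + (1 - alpha) * L1.

    target, warped: (B, 3, H, W) images in [0, 1]; `warped` is the source
    frame reprojected into the target view via predicted depth and pose.
    Returns a (B, 1, H, W) error map (take its mean for a scalar loss).
    """
    # L1 term, averaged over color channels.
    l1 = (target - warped).abs().mean(1, keepdim=True)

    # Simplified SSIM computed with 3x3 average-pooling windows.
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(warped, 3, 1, 1)
    sigma_x = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(target * warped, 3, 1, 1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    return alpha * dssim + (1 - alpha) * l1
```

In the typical pipeline, the warped image is obtained by back-projecting target pixels with the predicted depth, transforming them with the predicted relative camera pose, and sampling the source frame at the reprojected coordinates; the depth and pose networks are trained jointly by minimizing this error, usually together with a per-pixel minimum over multiple source frames and an edge-aware smoothness term.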

Keywords