Sensors (Mar 2023)

Unsupervised Depth Completion Guided by Visual Inertial System and Confidence

  • Hanxuan Zhang,
  • Ju Huo

DOI
https://doi.org/10.3390/s23073430
Journal volume & issue
Vol. 23, no. 7
p. 3430

Abstract

This paper addresses the problem of learning depth completion from sparse depth maps and RGB images. Specifically, we describe a real-time unsupervised depth completion method for dynamic scenes guided by a visual-inertial system and confidence. Our method better handles problems such as occlusion in dynamic scenes, limited computational resources, and unlabeled training samples. Its core is a new compact network that performs depth completion guided by images, pose, and confidence. Since visual-inertial information is the only source of supervision, we design a novel confidence-guided loss function. In particular, to address pixel mismatch caused by object motion and occlusion in dynamic scenes, we divide the images into static, dynamic, and occluded regions and design a loss function matched to each region. Experimental results on dynamic datasets and in real dynamic scenes show that this regularization alone is sufficient to train depth completion models. Our depth completion network exceeds the accuracy of prior unsupervised depth completion methods while requiring only a small number of parameters.
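
The abstract's region-wise, confidence-guided loss can be illustrated schematically. The sketch below is a minimal PyTorch-style illustration, not the authors' implementation: it assumes a photometric reprojection error computed from a frame warped with the predicted depth and visual-inertial pose, a per-pixel confidence map that weights that error, separate masks for static, dynamic, and occluded regions, and a sparse-depth consistency term. All function and parameter names (depth_completion_loss, w_static, w_dynamic, w_occluded, w_sparse) are hypothetical.

    # Minimal sketch (not the authors' code): a confidence-weighted,
    # region-masked unsupervised depth completion loss. Region masks and the
    # warped image are assumed to be precomputed from visual-inertial pose.
    import torch

    def depth_completion_loss(pred_depth, sparse_depth, target_img, warped_img,
                              confidence, static_mask, dynamic_mask, occluded_mask,
                              w_static=1.0, w_dynamic=0.5, w_occluded=0.0,
                              w_sparse=1.0):
        # Photometric reprojection error between the target frame and the
        # frame warped using predicted depth and visual-inertial pose.
        photo_err = (target_img - warped_img).abs().mean(dim=1, keepdim=True)

        # Confidence guidance: low-confidence pixels contribute less.
        weighted_err = confidence * photo_err

        def masked_mean(err, mask):
            # Average the error over the pixels selected by the mask.
            return (err * mask).sum() / mask.sum().clamp(min=1.0)

        # Region-wise terms: static regions are fully supervised, dynamic
        # regions are down-weighted, occluded regions are excluded by default.
        loss_static = masked_mean(weighted_err, static_mask)
        loss_dynamic = masked_mean(weighted_err, dynamic_mask)
        loss_occluded = masked_mean(weighted_err, occluded_mask)

        # Sparse depth consistency: predictions should agree with the sparse
        # input wherever measurements exist.
        valid = (sparse_depth > 0).float()
        loss_sparse = masked_mean((pred_depth - sparse_depth).abs(), valid)

        return (w_static * loss_static + w_dynamic * loss_dynamic
                + w_occluded * loss_occluded + w_sparse * loss_sparse)

The specific weighting of dynamic and occluded regions is an assumption made for illustration; the paper designs a dedicated loss for each region type, and the exact formulation is given in the full text.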

Keywords