Sensors (Feb 2022)

Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints

  • Baigan Zhao,
  • Yingping Huang,
  • Wenyan Ci,
  • Xing Hu

DOI: https://doi.org/10.3390/s22041383
Journal volume & issue: Vol. 22, no. 4, p. 1383

Abstract

This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion estimation from monocular video. The framework exploits optical flow (OF) properties to jointly train the depth and ego-motion models. Unlike existing unsupervised methods, our method extracts features from the optical flow rather than from the raw RGB images, thereby strengthening the unsupervised learning signal. In addition, we apply a forward-backward consistency check to the optical flow to generate a mask of invalid image regions, and accordingly exclude outlier regions such as occlusions and moving objects from the learning. Furthermore, beyond using view synthesis as a supervision signal, we impose an optical flow consistency loss and a depth consistency loss on the valid image regions to further enhance the training of the models. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms other unsupervised methods.
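To illustrate the forward-backward consistency check mentioned in the abstract, the sketch below shows one common way to derive a validity mask from a pair of forward and backward optical flow fields in PyTorch. The function name fb_consistency_mask, the exact threshold form, and the constants alpha and beta are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def fb_consistency_mask(flow_fwd, flow_bwd, alpha=0.01, beta=0.5):
    """Return a binary mask of pixels passing the forward-backward
    optical flow consistency check (1 = valid, 0 = occluded/outlier).

    flow_fwd, flow_bwd: (B, 2, H, W) flows from frame t -> t+1 and
    t+1 -> t, in pixel units. Threshold constants are assumptions.
    """
    b, _, h, w = flow_fwd.shape

    # Pixel-coordinate grid, then warp the backward flow into frame t
    # using the forward flow (differentiable bilinear sampling).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow_fwd.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow_fwd                             # (B, 2, H, W)

    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    norm_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    norm_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((norm_x, norm_y), dim=-1)               # (B, H, W, 2)
    flow_bwd_warped = F.grid_sample(flow_bwd, sample_grid, align_corners=True)

    # For consistent pixels, forward flow + warped backward flow ~ 0.
    diff_sq = ((flow_fwd + flow_bwd_warped) ** 2).sum(dim=1)
    mag_sq = (flow_fwd ** 2).sum(dim=1) + (flow_bwd_warped ** 2).sum(dim=1)

    # Commonly used test: |f + b_warped|^2 < alpha * (|f|^2 + |b_warped|^2) + beta.
    mask = (diff_sq < alpha * mag_sq + beta).float().unsqueeze(1)     # (B, 1, H, W)
    return mask

Pixels that fail the check (typically occlusions and moving objects) would then be excluded from the view synthesis and consistency losses, as described in the abstract.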

Keywords