IEEE Access (Jan 2018)

Depth Estimation of Video Sequences With Perceptual Losses

  • Anjie Wang,
  • Zhijun Fang,
  • Yongbin Gao,
  • Xiaoyan Jiang,
  • Siwei Ma

DOI
https://doi.org/10.1109/ACCESS.2018.2846546
Journal volume & issue
Vol. 6
pp. 30536–30546

Abstract


3-D vision plays an important role in the intelligent perception of robots, but it typically requires extra 3-D sensors. Depth estimation from monocular videos provides an alternative way to recover 3-D information. In this paper, we propose an unsupervised learning framework that uses a perceptual loss for depth estimation. A depth network and a pose network are first trained to estimate the depth map and the camera motion of the video sequence, respectively. With the estimated depth and pose of the original frame, the adjacent frame can be reconstructed. The pixel-wise differences between the reconstructed frame and the original frame serve as a per-pixel loss. Meanwhile, the reconstructed and original views are passed through a pre-trained network to extract high-level features, which define a perceptual loss that assesses the quality of the reconstruction. We combine the respective advantages of these two losses and present an approach that generates a depth map by training the feed-forward network with both the per-pixel loss and the perceptual loss. The experimental results show that our method significantly improves the accuracy of the estimated depth maps.
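As a rough illustration of the combined objective described above (not the authors' implementation), the sketch below computes a per-pixel L1 reconstruction loss plus a perceptual loss from a pre-trained VGG-16 feature extractor. The choice of VGG-16, the relu3_3 feature layer, and the weight `lambda_perc` are illustrative assumptions; input normalization for the pre-trained network is omitted for brevity.

```python
# Minimal sketch: per-pixel loss + perceptual loss between a reconstructed
# view and the original view, using fixed VGG-16 features (assumption:
# relu3_3 features and lambda_perc=0.1 are placeholders, not the paper's values).
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        # Keep layers 0..15 (up to relu3_3) as a frozen feature extractor.
        self.features = nn.Sequential(*list(vgg.children())[:feature_layer]).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, reconstructed, original):
        # L2 distance between high-level feature maps of the two views.
        return nn.functional.mse_loss(self.features(reconstructed),
                                      self.features(original))

def total_loss(reconstructed, original, perceptual_loss, lambda_perc=0.1):
    # Per-pixel photometric loss (L1) plus weighted perceptual loss.
    pixel = nn.functional.l1_loss(reconstructed, original)
    perc = perceptual_loss(reconstructed, original)
    return pixel + lambda_perc * perc

if __name__ == "__main__":
    loss_fn = PerceptualLoss()
    recon = torch.rand(2, 3, 128, 416)   # reconstructed adjacent view
    target = torch.rand(2, 3, 128, 416)  # original adjacent view
    print(total_loss(recon, target, loss_fn).item())
```

In a full training pipeline, `recon` would come from warping the source frame with the depth and pose predicted by the two networks, and the combined loss would be backpropagated only through those networks, not through the frozen feature extractor.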
