Remote Sensing (Mar 2022)

Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation

  • Yingcai Wan,
  • Qiankun Zhao,
  • Cheng Guo,
  • Chenlong Xu,
  • Lijing Fang

DOI
https://doi.org/10.3390/rs14051228
Journal volume & issue
Vol. 14, no. 5
p. 1228

Abstract


This paper presents a new deep visual-inertial odometry and depth estimation framework that improves the accuracy of depth and ego-motion estimation from image sequences and raw inertial measurement unit (IMU) data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first extract dense features and solve for pose with deep visual odometry (DVO), and then fuse this pose estimation pipeline with deep inertial odometry (DIO) via an extended Kalman filter (EKF) to produce sparse depth and pose with absolute scale. We then couple deep visual-inertial odometry (DeepVIO) with depth estimation, using the sparse depth and pose from the DeepVIO pipeline to align the scale of the depth prediction with the triangulated point cloud and to reduce image reconstruction error. In this way, we combine the strengths of learning-based visual-inertial odometry (VIO) and depth estimation in an end-to-end self-supervised learning architecture. We evaluated the new framework on the KITTI datasets and compared it with previous techniques, showing improved ego-motion estimation and comparable depth estimation results, particularly in detailed regions.
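The scale-alignment step described in the abstract — rescaling a monocular depth prediction to agree with the sparse metric depth triangulated by the VIO pipeline — is commonly realized with a robust median-ratio fit. A minimal sketch of that idea (the function name and the per-frame median ratio are illustrative assumptions, not necessarily the paper's exact procedure):

```python
import numpy as np

def align_depth_scale(pred_depth, sparse_depth):
    """Rescale a dense depth prediction to match sparse metric depth.

    pred_depth:   (H, W) dense depth prediction with arbitrary scale.
    sparse_depth: (H, W) sparse metric depth (e.g. from triangulated
                  VIO points), zero where no measurement exists.
    Returns the rescaled dense depth and the scale factor applied.
    """
    mask = sparse_depth > 0
    # Ratio of metric depth to predicted depth at the sparse points;
    # the median is robust to outlier triangulations.
    scale = np.median(sparse_depth[mask] / pred_depth[mask])
    return scale * pred_depth, scale
```

Using the median rather than a mean or least-squares fit keeps a few badly triangulated points from corrupting the global scale, which matters when the sparse point cloud is noisy.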

Keywords