IEEE Access (Jan 2021)

Non-Learning Stereo-Aided Depth Completion Under Mis-Projection via Selective Stereo Matching

  • Yasuhiro Yao,
  • Ryoichi Ishikawa,
  • Shingo Ando,
  • Kana Kurata,
  • Naoki Ito,
  • Jun Shimamura,
  • Takeshi Oishi

DOI
https://doi.org/10.1109/ACCESS.2021.3117710
Journal volume & issue
Vol. 9
pp. 136674 – 136686

Abstract

We propose a non-learning depth completion method for a sparse depth map captured using a light detection and ranging (LiDAR) sensor, guided by a pair of stereo images. Conventional stereo-aided depth completion methods have two limitations: (i) they assume the given sparse depth map is accurately aligned with the input image, whereas such alignment is difficult to achieve in practice; and (ii) their accuracy is limited at long range because depth is estimated from pixel disparity. To address these limitations, we propose selective stereo matching (SSM), which searches for the most appropriate depth value for each image pixel among its neighboring projected LiDAR points within an energy minimization framework. This depth-selection approach can handle any type of mis-projection. Moreover, SSM offers better long-range depth accuracy because it directly uses the LiDAR measurements rather than depth derived from stereo disparity. Because SSM is a discrete process, we then apply variational smoothing with a binary anisotropic diffusion tensor (B-ADT) to generate a continuous depth map while preserving depth discontinuities across object boundaries. Experimentally, compared with the previous state-of-the-art stereo-aided depth completion method, the proposed method reduced the mean absolute error (MAE) of the depth estimation to 0.65 times that of the previous method and achieved approximately twice the accuracy at long range. Moreover, under various LiDAR-camera calibration errors, the proposed method reduced the depth estimation MAE to 0.34-0.93 times that of previous depth completion methods.
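As a rough illustration of the depth-selection idea summarized in the abstract, the Python sketch below implements only a simplified per-pixel data term: for each pixel it gathers candidate depths from nearby projected LiDAR points, converts each candidate to a stereo disparity, and keeps the candidate with the lowest patch-matching cost. The function and parameter names (select_depth_per_pixel, window, patch, fx, baseline) are illustrative assumptions, not the paper's code; the actual SSM formulation additionally couples neighboring pixels through an energy minimization and is followed by B-ADT variational smoothing.

```python
import numpy as np

def select_depth_per_pixel(left, right, sparse_depth, fx, baseline,
                           window=5, patch=3):
    """Illustrative sketch: per-pixel selection of a LiDAR depth candidate.

    left, right  : grayscale, rectified stereo pair (H x W arrays)
    sparse_depth : LiDAR depths projected onto the left image (0 = no point)
    fx, baseline : focal length in pixels and stereo baseline in meters

    For each pixel, candidate depths are taken from LiDAR points projected
    within a (window x window) neighborhood; each candidate is scored by the
    sum of absolute differences between left/right patches at the implied
    disparity, and the cheapest candidate is kept. Data term only; the paper
    adds a smoothness term via energy minimization. Not optimized for speed.
    """
    h, w = left.shape
    out = np.zeros_like(sparse_depth, dtype=np.float32)
    half_w, half_p = window // 2, patch // 2
    ys, xs = np.nonzero(sparse_depth > 0)          # pixels hit by projected LiDAR points
    for y in range(half_p, h - half_p):
        for x in range(half_p, w - half_p):
            # Candidate depths: LiDAR points projected near (x, y),
            # which tolerates mis-projection of individual points.
            near = (np.abs(ys - y) <= half_w) & (np.abs(xs - x) <= half_w)
            cands = sparse_depth[ys[near], xs[near]]
            if cands.size == 0:
                continue
            ref = left[y - half_p:y + half_p + 1, x - half_p:x + half_p + 1]
            best_cost, best_d = np.inf, 0.0
            for d in cands:
                disp = int(round(fx * baseline / d))   # depth -> disparity
                xr = x - disp                          # corresponding column in right image
                if xr - half_p < 0 or xr + half_p >= w:
                    continue
                tgt = right[y - half_p:y + half_p + 1,
                            xr - half_p:xr + half_p + 1]
                cost = np.abs(ref.astype(np.float32) - tgt.astype(np.float32)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, float(d)
            out[y, x] = best_d                         # 0 where no candidate was usable
    return out
```

Because the selected values are actual LiDAR measurements rather than triangulated disparities, this selection step preserves long-range depth accuracy; the resulting map is still piecewise discrete, which is why the paper follows it with B-ADT variational smoothing.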

Keywords