IEEE Access (Jan 2024)

Deep Sparse Depth Completion Using Multi-Scale Residuals and Channel Shuffle

  • Zhi Liu,
  • Cheolkon Jung

DOI
https://doi.org/10.1109/ACCESS.2024.3353048
Journal volume & issue
Vol. 12
pp. 18189–18197

Abstract

Depth completion aims to recover dense depth maps from sparse depth maps. Recent approaches have used additional modalities as guidance to improve depth completion performance. Image-guided depth completion exploits scene information from color images, but it still produces inaccurate object boundaries. In this paper, we propose deep sparse depth completion using multi-scale residuals and channel shuffle, named ReCSNet. ReCSNet is a dual-branch network based on a U-shaped architecture. It consists of a VIS-Semantic-Guided Branch (VSGB) and a Sparse Depth Guided Branch (SDGB) to capture global color and edge information as well as locally accurate depth information. VSGB uses two encoders to extract features from the VIS-Semantic image pairs and the sparse depth maps, and employs a feature channel shuffle mechanism to blend the two sets of encoded features. The semi-dense depth map generated by VSGB is concatenated with the original sparse depth map and fed into SDGB to predict a second semi-dense depth map. The confidence maps generated by the two branches are adaptively fused to produce the final depth map. Moreover, we incorporate multi-scale residuals obtained from the VIS image and concatenate them with the decoded features to further strengthen the constraint on object boundaries. At the end of the dual-branch network, we add a Repetitive Deformable Convolution Module (RDCM) to further refine depth values at object edges. Experimental results show that ReCSNet achieves outstanding performance on the KITTI depth completion validation set, improving the root mean square error (RMSE) by 16 mm.
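
The channel shuffle and confidence-based fusion steps described in the abstract can be sketched as follows. This is a minimal, illustrative PyTorch sketch based only on the abstract; the function and variable names (channel_shuffle, fuse_by_confidence, feat_color, and so on) are assumptions for illustration, not the authors' implementation.

# Minimal sketch of two operations described in the abstract:
# (1) a ShuffleNet-style channel shuffle that interleaves features
#     from the two VSGB encoders, and
# (2) confidence-weighted fusion of the two branch depth predictions.
# All names here are illustrative assumptions, not the authors' code.
import torch

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Interleave channels across `groups` to mix the two encoded feature sets."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

def fuse_by_confidence(depth_a, conf_a, depth_b, conf_b):
    """Adaptively fuse two depth maps using softmax-normalized confidence maps."""
    w = torch.softmax(torch.cat([conf_a, conf_b], dim=1), dim=1)
    return w[:, 0:1] * depth_a + w[:, 1:2] * depth_b

# Example: blend color-branch and depth-branch features, then fuse predictions.
feat_color = torch.randn(1, 32, 64, 64)   # features from the VIS-Semantic encoder
feat_depth = torch.randn(1, 32, 64, 64)   # features from the sparse-depth encoder
blended = channel_shuffle(torch.cat([feat_color, feat_depth], dim=1), groups=2)

depth_vsgb, conf_vsgb = torch.rand(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
depth_sdgb, conf_sdgb = torch.rand(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
final_depth = fuse_by_confidence(depth_vsgb, conf_vsgb, depth_sdgb, conf_sdgb)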

Keywords