Scientific Reports (Mar 2024)

Dense monocular depth estimation for stereoscopic vision based on pyramid transformer and multi-scale feature fusion

  • Zhongyi Xia,
  • Tianzhao Wu,
  • Zhuoyan Wang,
  • Man Zhou,
  • Boqi Wu,
  • C. Y. Chan,
  • Ling Bing Kong

DOI
https://doi.org/10.1038/s41598-024-57908-z
Journal volume & issue
Vol. 14, no. 1
pp. 1 – 19

Abstract


Stereoscopic display technology plays a significant role in industries such as film, television, and autonomous driving. The accuracy of depth estimation is crucial for achieving high-quality and realistic stereoscopic display effects. To address the inherent challenges of applying Transformers to depth estimation, the Stereoscopic Pyramid Transformer-Depth (SPT-Depth) is introduced. This method uses stepwise downsampling to acquire both shallow and deep semantic information, which are subsequently fused. The training process is divided into fine and coarse convergence stages, each employing distinct training strategies and hyperparameters, resulting in a substantial reduction in both training and validation losses. In the training strategy, a shift- and scale-invariant mean square error function is employed to compensate for the lack of translational invariance in Transformers. Additionally, an edge-smoothing function is applied to reduce noise in the depth map, enhancing the model's robustness. SPT-Depth achieves a global receptive field while effectively reducing time complexity. Compared with the baseline method on the New York University Depth V2 (NYU Depth V2) dataset, there is a 10% reduction in Absolute Relative Error (Abs Rel) and a 36% decrease in Root Mean Square Error (RMSE). Compared with state-of-the-art methods, there is a 17% reduction in RMSE.
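To make the two loss terms named in the abstract concrete, the sketch below shows one common formulation of a scale- and shift-invariant MSE (prediction aligned to ground truth by a closed-form least-squares scale and shift) and an edge-aware smoothness penalty. This is an illustrative assumption, not the authors' implementation; the function names, tensor shapes, and weighting scheme are placeholders.

```python
# Minimal sketch (assumed, not the paper's code) of a scale-and-shift-invariant
# MSE and an edge-aware depth-smoothness term, in PyTorch.
import torch


def scale_shift_invariant_mse(pred, target):
    """Align pred to target with a per-image least-squares scale/shift, then MSE.

    pred, target: (B, H, W) depth maps.
    """
    B = pred.shape[0]
    p = pred.reshape(B, -1)
    t = target.reshape(B, -1)

    # Closed-form least-squares fit of s, b in  s * p + b ~= t  per image.
    p_mean = p.mean(dim=1, keepdim=True)
    t_mean = t.mean(dim=1, keepdim=True)
    cov = ((p - p_mean) * (t - t_mean)).mean(dim=1, keepdim=True)
    var = ((p - p_mean) ** 2).mean(dim=1, keepdim=True).clamp(min=1e-6)
    s = cov / var
    b = t_mean - s * p_mean

    aligned = s * p + b
    return ((aligned - t) ** 2).mean()


def edge_aware_smoothness(pred, image):
    """Penalize depth gradients, down-weighted where the RGB image has edges.

    pred: (B, H, W) depth; image: (B, 3, H, W) RGB used for edge weights.
    """
    d = pred.unsqueeze(1)                                   # (B, 1, H, W)
    dx_d = (d[..., :, 1:] - d[..., :, :-1]).abs()           # horizontal depth gradient
    dy_d = (d[..., 1:, :] - d[..., :-1, :]).abs()           # vertical depth gradient

    dx_i = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(1, keepdim=True)
    dy_i = (image[..., 1:, :] - image[..., :-1, :]).abs().mean(1, keepdim=True)

    # Strong image gradients (edges) shrink the smoothness penalty there,
    # so depth discontinuities are allowed to follow object boundaries.
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()
```

A training loop would typically combine the two as a weighted sum, e.g. `loss = scale_shift_invariant_mse(pred, gt) + lam * edge_aware_smoothness(pred, rgb)`, with the weight `lam` chosen per convergence stage; the exact weighting used by SPT-Depth is described in the full paper, not here.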

Keywords