IEEE Access (Jan 2023)
Coarse-to-Fine Stereo Matching Network Based on Multi-Scale Structural Information Filtrating
Abstract
Stereo vision measurement is widely applied in tasks such as autonomous driving and 3D scene reconstruction. Accurately obtaining the disparity of stereo images relies on effective stereo matching algorithms. Compared with traditional algorithms, stereo matching algorithms based on convolutional neural networks (CNNs) achieve higher accuracy. In this paper, we propose Cs-Net, a coarse-to-fine stereo matching framework that incorporates structural information filtering to obtain accurate disparity maps. The proposed framework specifically targets accurate disparity estimation and improves stereo matching in ill-posed regions, such as texture-less and reflective surfaces. To this end, the framework incorporates several key modules. First, a contextual attention feature extraction module is introduced, which plays a crucial role in obtaining context information for ill-posed regions. Second, a structural attention weight generation module is designed to alleviate the stereo matching errors caused by a lack of structural information; the structure boundaries generated by this module are shown to correlate with stereo matching errors. Furthermore, a two-stage cost aggregation module regularizes the initial cost volume and effectively aggregates depth information to further reduce matching errors. In ablation studies on the KITTI2015 validation dataset, Cs-Net improves the D3 and EPE metrics over the baseline algorithm (GwcNet) by 14.4% and 0.16 px, respectively. In the reflective regions of the KITTI2012 benchmark, Cs-Net reduces the D3 and D5 metrics by 15.3% and 20.1% relative to the baseline. Additionally, on the DriveStereo dataset, Cs-Net reduces the D3 and EPE metrics by 23.5% and 0.09 px, respectively.
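To make the cost-volume terminology in the abstract concrete, the following is a minimal illustrative sketch of classical cost volume construction with a winner-takes-all disparity readout. It uses a simple absolute-difference matching cost and NumPy; it is not the learned group-wise cost volume or two-stage aggregation used by Cs-Net, and all function names here are illustrative.

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Build a simple matching-cost volume for rectified grayscale images.

    cost[d, y, x] measures how well left pixel (y, x) matches right
    pixel (y, x - d); invalid positions (x < d) are set to infinity.
    """
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Absolute-difference cost over the columns where x - d >= 0.
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d if d else W])
    return cost

def wta_disparity(cost):
    """Winner-takes-all: pick the disparity with minimum cost per pixel.

    Learned methods like Cs-Net instead regularize the volume with 3D
    convolutions before a soft-argmin readout; this is the unrefined baseline.
    """
    return np.argmin(cost, axis=0)
```

A quick sanity check: if the left image is the right image shifted by 2 pixels, the recovered disparity should be 2 everywhere away from the left border, where the true match is out of view.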
Keywords