Virtual Reality & Intelligent Hardware (Oct 2022)

DSD-MatchingNet: Deformable Sparse-to-Dense Feature Matching for Learning Accurate Correspondences

  • Yicheng Zhao,
  • Han Zhang,
  • Ping Lu,
  • Ping Li,
  • EnHua Wu,
  • Bin Sheng

Journal volume & issue
Vol. 4, no. 5
pp. 432–443

Abstract


Background: Establishing correspondences across multi-view images is the basis of many computer-vision tasks. However, the accuracy of most existing methods degrades under challenging conditions. To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this paper. First, we develop a deformable feature extraction module that produces multi-level feature maps and harvests contextual information from dynamic receptive fields. The dynamic receptive fields provided by deformable convolutional networks enable our method to obtain dense and robust correspondences. Second, we exploit the symmetry of correspondence in a sparse-to-dense matching scheme to achieve accurate pixel-level matching. Experiments show that the proposed DSD-MatchingNet outperforms prior methods on both image matching and visual localization benchmarks. Specifically, our method achieves 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recall on the Aachen Day-Night dataset.
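The "symmetry of correspondence" mentioned above is commonly enforced with a mutual-nearest-neighbour check: a match from image A to image B is kept only if it is also the best match from B back to A. The sketch below is a simplified, hypothetical illustration of that symmetric filtering step in NumPy, not the paper's actual implementation (which operates sparse-to-dense on learned multi-level feature maps); descriptor names and shapes are assumptions.

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Keep only correspondences that are nearest neighbours in both
    directions, i.e. the symmetric (cycle-consistent) matches.

    desc_a: (Na, D) descriptors from image A (assumed L2-normalised)
    desc_b: (Nb, D) descriptors from image B (assumed L2-normalised)
    Returns an (M, 2) array of index pairs (i in A, j in B).
    """
    # Cosine-similarity matrix between the two descriptor sets.
    sim = desc_a @ desc_b.T
    # Best match in B for each descriptor of A, and vice versa.
    nn_ab = sim.argmax(axis=1)   # A -> B
    nn_ba = sim.argmax(axis=0)   # B -> A
    # Symmetry check: keep i -> j only if j maps back to i.
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a
    return np.stack([idx_a[mutual], nn_ab[mutual]], axis=1)

# Toy example: descriptors of B are a permutation of those of A,
# so the symmetric matches recover that permutation exactly.
desc_a = np.eye(4)
desc_b = desc_a[[2, 0, 3, 1]]
matches = mutual_nearest_matches(desc_a, desc_b)
# matches -> [[0, 1], [1, 3], [2, 0], [3, 2]]
```

In a sparse-to-dense setting, `desc_a` would hold descriptors at detected keypoints while `desc_b` would be sampled densely, letting the symmetric check refine each sparse match to pixel-level accuracy.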

Keywords