IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

Shape-Adaptive Modality Independent Region Descriptor for Multimodal Remote Sensing Image Matching

  • Xuecong Liu,
  • Xichao Teng,
  • Yijie Bian,
  • Zhang Li,
  • Qifeng Yu

DOI
https://doi.org/10.1109/JSTARS.2024.3447219
Journal volume & issue
Vol. 17
pp. 18139 – 18155

Abstract


With the advancement of remote sensing technologies, multimodal image matching has become increasingly important. Multimodal remote sensing images exhibit significant differences in radiometric and geometric properties owing to their distinct imaging principles and contamination by various types of noise, so multimodal image matching remains a challenging problem. This article proposes a novel shape-adaptive modality independent region descriptor that achieves robust matching across modalities by leveraging local structural self-similarities within the images. The proposed method first models the noise of each modality and applies the local polynomial approximation–intersection of confidence intervals (LPA-ICI) technique to fit local texture patterns. It then constructs the descriptor from regional gradients extracted within local structural masks. Experimental results demonstrate that the proposed approach extracts structural features from multimodal images with good noise robustness, especially in weakly textured regions such as mountainous and island areas, and achieves high accuracy and robustness in multimodal image matching.
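
The abstract outlines a two-stage pipeline: shape-adaptive neighborhood estimation via LPA-ICI, followed by a descriptor built from regional gradients inside the resulting structural masks. The sketch below is a minimal, illustrative Python rendering of those two ingredients, not the authors' implementation; the zero-order (local-mean) LPA, the 8-direction set, the candidate scales, the Gamma threshold, and all function names are assumptions made for the example.

# Illustrative sketch only (not the paper's SA-MIRD code): (1) zero-order
# LPA-ICI scale selection builds a shape-adaptive structural mask around a
# pixel, (2) a regional gradient orientation histogram inside that mask
# serves as a simple descriptor. Directions, scales and GAMMA are assumed.
import numpy as np

DIRECTIONS = [(0, 1), (1, 1), (1, 0), (1, -1),
              (0, -1), (-1, -1), (-1, 0), (-1, 1)]   # 8 compass directions
SCALES = [1, 2, 3, 5, 7]                             # candidate window lengths h
GAMMA = 2.0                                          # ICI confidence parameter


def estimate_noise_sigma(img):
    """Crude noise-level estimate from horizontal pixel differences (MAD)."""
    d = np.diff(img, axis=1)
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2)


def directional_means(img, y, x, direction, scales):
    """Zero-order LPA along one ray: mean estimate and its relative std per scale."""
    dy, dx = direction
    h_max = scales[-1]
    ys = np.clip(y + dy * np.arange(h_max + 1), 0, img.shape[0] - 1)
    xs = np.clip(x + dx * np.arange(h_max + 1), 0, img.shape[1] - 1)
    ray = img[ys, xs]
    means = np.array([ray[:h + 1].mean() for h in scales])
    stds = np.array([1.0 / np.sqrt(h + 1) for h in scales])  # scaled by sigma later
    return means, stds


def ici_scale(means, stds, sigma, gamma=GAMMA):
    """Intersection-of-confidence-intervals rule: largest admissible scale index."""
    lo, hi = -np.inf, np.inf
    best = 0
    for k, (m, s) in enumerate(zip(means, stds)):
        lo = max(lo, m - gamma * sigma * s)
        hi = min(hi, m + gamma * sigma * s)
        if lo > hi:                                   # intervals stop intersecting
            break
        best = k
    return best


def shape_adaptive_mask(img, y, x):
    """Union of the per-direction rays kept by the ICI rule -> boolean mask."""
    sigma = estimate_noise_sigma(img)
    mask = np.zeros(img.shape, dtype=bool)
    mask[y, x] = True
    for d in DIRECTIONS:
        means, stds = directional_means(img, y, x, d, SCALES)
        h = SCALES[ici_scale(means, stds, sigma)]
        dy, dx = d
        ys = np.clip(y + dy * np.arange(h + 1), 0, img.shape[0] - 1)
        xs = np.clip(x + dx * np.arange(h + 1), 0, img.shape[1] - 1)
        mask[ys, xs] = True
    return mask


def regional_gradient_descriptor(img, mask, n_bins=8):
    """Orientation histogram of gradients inside the structural mask."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)[mask]
    ang = np.mod(np.arctan2(gy, gx)[mask], np.pi)     # sign-invariant orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.normal(0.5, 0.05, (64, 64))
    image[:, 32:] += 0.5                              # a step edge as "structure"
    m = shape_adaptive_mask(image, 32, 30)
    print(regional_gradient_descriptor(image, m))

Matching two modalities would then compare such histograms (e.g., by correlation) at candidate correspondences; the paper's actual descriptor construction and similarity measure are defined in the full text.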

Keywords