IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)
Multifeature Alignment and Matching Network for SAR and Optical Image Registration
Abstract
Due to the modal disparities between synthetic aperture radar (SAR) and optical images, effectively extracting modality-shared structural features is crucial for achieving accurate registration results. Considering that point features have a limited ability to describe the common structures of SAR and optical images, graph topology is introduced to extract edge features and thereby derive modality-shared structural features for reliable registration. In this article, we propose a registration network for multifeature alignment and matching (MFAM-RegNet) between SAR and optical images, which includes a multifeature alignment module (MFAM) and a multifeature matching module (MFMM). First, we construct an MFAM to extract and align point and edge features so as to mine modality-shared structural features. In MFAM, point features are extracted by graph neural networks, and edge features are constructed from the feature similarity between pairs of keypoints. Inspired by graph matching, we design linear and quadratic contrastive learning to mine the correspondences of point and edge features within and across modalities. Second, speckle noise in SAR images inevitably produces some noisy labels, which degrades the accuracy and robustness of our supervised algorithm. Therefore, we design an MFMM to correct noisy labels and use bidirectional matching for robust matching. Guided by the essential feature relationships mined through a momentum contrastive learning strategy, the labels are adaptively corrected to reduce the influence of incorrect labels on the model's performance and achieve more stable matching results. Experiments on three publicly available SAR and optical datasets indicate that our proposed MFAM-RegNet outperforms existing state-of-the-art algorithms.
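The bidirectional matching mentioned above is commonly realized as a mutual nearest-neighbor check on descriptor similarity. The sketch below is a minimal illustration of that general idea (not the paper's actual MFMM implementation): a candidate pair of keypoints is kept only if each descriptor is the other's best match under cosine similarity. The function name and array shapes are assumptions for illustration.

```python
import numpy as np

def mutual_nearest_matching(feat_a: np.ndarray, feat_b: np.ndarray):
    """Bidirectional (mutual nearest-neighbor) matching between two
    descriptor sets of shape (num_keypoints, dim)."""
    # L2-normalize so the dot product equals cosine similarity.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    sim = a @ b.T  # pairwise similarity matrix

    # Forward pass: best match in B for each keypoint in A;
    # backward pass: best match in A for each keypoint in B.
    fwd = sim.argmax(axis=1)
    bwd = sim.argmax(axis=0)

    # Keep only pairs that agree in both directions.
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```

The mutual check discards one-sided correspondences, which is why bidirectional matching is more robust to ambiguous descriptors than a single forward nearest-neighbor search.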
Keywords