IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)
PS-GAN: A Novel Pseudo-Siamese Generative Adversarial Network for Multimodal Remote Sensing Image Change Detection
Abstract
In recent years, research on image translation for multimodal remote sensing imagery in change detection (CD) has demonstrated that converting images from different sensors into a common image domain can effectively address the incomparability issues arising from imaging differences, thereby enabling traditional CD models to extract change information from diverse data sources. However, most existing studies overlook global contextual information during the image conversion process and therefore fail to capture the overall semantic structure of the image. This can cause semantic confusion between similar features and degrade CD accuracy. To address this, we propose a pseudo-Siamese generative adversarial network (PS-GAN) that jointly considers local and global information. Unlike conventional GANs that focus solely on local features, PS-GAN uses a two-branch encoder to separately extract global and local information. These features are then fused through a carefully designed adaptive multi-scale feature fusion module, ensuring that the translated images are texturally clear and structurally intact, thus reducing the occurrence of pseudo-changes in CD. In addition, we adopt U2Net+ as the CD model and develop a gated feature modulation mechanism and an enhanced squeeze-and-excitation module to suppress redundant features while strengthening the semantic representation of change features, further improving CD accuracy. Finally, we conducted comparative experiments on four real-world datasets against eleven state-of-the-art multimodal CD methods. The experimental results clearly demonstrate the superiority of the proposed method, which achieves more accurate detection outcomes.
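The abstract does not detail the enhanced squeeze-and-excitation module. For readers unfamiliar with the base mechanism it builds on, the following is a minimal NumPy sketch of a standard squeeze-and-excitation block (squeeze by global average pooling, excitation by a bottleneck MLP with a sigmoid gate, then channel-wise rescaling); all function names, shapes, and weights here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Channel recalibration in the spirit of a squeeze-and-excitation block.

    x  : feature map of shape (C, H, W)
    w1 : bottleneck weight of shape (C // r, C), r = reduction ratio
    w2 : expansion weight of shape (C, C // r)
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate -> per-channel weights in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale: reweight each channel of the input feature map
    return x * gate[:, None, None]
```

In this form, channels whose global statistics the learned gate deems uninformative are attenuated toward zero, which is the kind of redundancy suppression the abstract attributes to its enhanced variant.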
Keywords