IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

Concatenated Deep-Learning Framework for Multitask Change Detection of Optical and SAR Images

  • Zhengshun Du,
  • Xinghua Li,
  • Jianhao Miao,
  • Yanyuan Huang,
  • Huanfeng Shen,
  • Liangpei Zhang

DOI
https://doi.org/10.1109/JSTARS.2023.3333959
Journal volume & issue
Vol. 17
pp. 719 – 731

Abstract


Optical and synthetic aperture radar (SAR) images provide complementary information to each other. However, the heterogeneity of the same ground objects across the two modalities makes change detection (CD) difficult. Accordingly, transformation-based methods have been developed that treat image translation and CD as two independent tasks. Most such methods apply deep learning only to image translation, and the simple clustering and threshold segmentation that follows leads to poor CD results. Recently, a deep translation-based CD network (DTCDN) was proposed that applies deep learning to both image translation and CD to improve the results. However, DTCDN requires sequential training of two independent subnetworks at a high computational cost. To this end, a concatenated deep-learning framework for optical and SAR images, the multitask change detection network (MTCDN), is proposed by integrating the CD network into a complete generative adversarial network. The framework contains two generators and two discriminators for the optical and SAR image domains. "Multitask" refers to the combination of image identification by the discriminators and CD based on an improved UNet++. The generators perform image translation to unify the two images into the same feature domain. In both the training and prediction stages, the framework operates end to end, reducing cost. Experimental results on four optical and SAR datasets demonstrate the effectiveness and robustness of the proposed framework against eight baselines.
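The abstract describes a GAN-style framework in which two generators translate between the optical and SAR domains, two discriminators judge real versus translated images, and a CD subnetwork operates on the unified feature domain. The sketch below is a minimal, illustrative PyTorch rendering of that structure under stated assumptions: layer widths, block choices, and the simplified CD head are placeholders, not the authors' actual configuration (the paper uses an improved UNet++ for CD).

```python
# Minimal sketch of the MTCDN idea from the abstract: generators translate
# between optical and SAR domains, discriminators identify real vs. translated
# images, and a change-detection head compares pre/post images in one domain.
# All widths and block structures below are assumptions for illustration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """Toy encoder-decoder standing in for an image-translation generator."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 32),
            conv_block(32, 64),
            conv_block(64, 32),
            nn.Conv2d(32, out_ch, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style discriminator: distinguishes real from translated images."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class ChangeHead(nn.Module):
    """Simplified CD head; the paper uses an improved UNet++ here."""
    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 32),
            conv_block(32, 32),
            nn.Conv2d(32, 1, 1),  # per-pixel change logit
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))


class MTCDN(nn.Module):
    """End-to-end multitask wrapper: translation + change detection."""
    def __init__(self):
        super().__init__()
        self.G_opt2sar = Generator()
        self.G_sar2opt = Generator()
        self.D_sar = Discriminator()
        self.D_opt = Discriminator()
        self.cd_head = ChangeHead()

    def forward(self, optical_t1, sar_t2):
        fake_sar = self.G_opt2sar(optical_t1)        # move T1 image into SAR domain
        change_logits = self.cd_head(fake_sar, sar_t2)
        return fake_sar, change_logits


# Usage example with random tensors standing in for co-registered image pairs.
model = MTCDN()
opt_t1 = torch.randn(2, 3, 128, 128)   # pre-change optical image
sar_t2 = torch.randn(2, 3, 128, 128)   # post-change SAR image
fake_sar, change_map = model(opt_t1, sar_t2)
print(fake_sar.shape, change_map.shape)  # [2, 3, 128, 128] and [2, 1, 128, 128]
```

In an actual training loop, the discriminators would be optimized adversarially against the generators while the CD head is trained jointly with a segmentation loss, which is what allows the whole framework to be trained end to end rather than in two sequential stages as in DTCDN.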

Keywords