IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)

DED-SAM: Adapting Segment Anything Model 2 for Dual Encoder–Decoder Change Detection

  • Junlong Qiu,
  • Wei Liu,
  • Xin Zhang,
  • Erzhu Li,
  • Lianpeng Zhang,
  • Xing Li

DOI
https://doi.org/10.1109/JSTARS.2024.3490754
Journal volume & issue
Vol. 18
pp. 995–1006

Abstract

Change detection has become a crucial topic in deep learning for remote sensing due to its extensive application in Earth observation. However, real remote sensing images often contain multiple land-cover classes with significant intraclass variability and interclass similarity, limiting change detection performance in complex scenarios. To address this, we leverage the capabilities of vision foundation models by applying the segment anything model (SAM) to remote sensing change detection, and we name this method dual encoder–decoder SAM (DED-SAM). Specifically, we construct a DED framework that uses a small-scale change detection model in both branches to generate mixed prompts comprising image features, mask prompts, and box prompts. The SAM 2 model is then used for fine-grained recognition of the dual-temporal images, generating accurate and stable feature boundaries, which serve as constraints for producing the final change mask. To validate the effectiveness of DED-SAM across application scenarios, we conduct quantitative experiments on three public datasets: LEVIR-CD, SYSU-CD, and CDD, testing its detection capabilities under a single change category, multiple change categories, and seasonal pseudochange interference. The results show that the proposed DED-SAM achieves state-of-the-art F1 scores and IoUs on all three datasets: LEVIR-CD (92.00%, 85.11%), SYSU-CD (84.15%, 72.01%), and CDD (97.72%, 95.47%).
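The abstract describes a pipeline in which twin branches of a small change detection model produce mixed prompts (image features, a coarse mask, and boxes), SAM 2 refines object boundaries for each image epoch, and those boundaries constrain the final change mask. The sketch below illustrates that data flow in PyTorch under loose assumptions: SmallCDBranch, boxes_from_mask, and run_sam2 are hypothetical stand-ins introduced here for illustration (the SAM 2 call is a stub, not the real model API), and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SmallCDBranch(nn.Module):
    """Hypothetical lightweight change-detection branch: a shared encoder
    over both epochs plus a decoder that fuses the two feature maps into
    a coarse change mask, later used as a mask prompt."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 1, 1),
        )

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        coarse_logits = self.decoder(torch.cat([f1, f2], dim=1))
        return f1, f2, coarse_logits

def boxes_from_mask(mask_logits, thresh=0.5):
    """Derive one bounding-box prompt (x1, y1, x2, y2) per image from the
    coarse mask; a real system would extract boxes per changed region."""
    boxes = []
    for m in (mask_logits.sigmoid() > thresh):  # iterate over batch
        ys, xs = torch.where(m[0])
        if len(xs) == 0:
            boxes.append(None)
        else:
            boxes.append((xs.min().item(), ys.min().item(),
                          xs.max().item(), ys.max().item()))
    return boxes

def run_sam2(image, mask_prompt, box_prompt):
    """Stub for SAM 2 inference: the real model would consume the mixed
    prompts and return fine-grained, stable object boundaries."""
    return torch.sigmoid(mask_prompt)  # identity stand-in

# Toy end-to-end pass over a pair of 256x256 RGB tiles.
branch = SmallCDBranch()
t1, t2 = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
f1, f2, coarse = branch(t1, t2)
boxes = boxes_from_mask(coarse)
fine_t1 = run_sam2(t1, coarse, boxes)  # boundaries for epoch 1
fine_t2 = run_sam2(t2, coarse, boxes)  # boundaries for epoch 2
# Use the SAM-derived boundaries as constraints on the final change mask.
final_mask = (coarse.sigmoid() * fine_t1 * fine_t2) > 0.5
print(final_mask.shape)  # torch.Size([1, 1, 256, 256])
```

The multiplicative fusion in the last step is only one plausible way to apply boundary constraints; the paper's actual constraint mechanism may differ.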

Keywords