GIScience & Remote Sensing (Dec 2024)

U-SeqNet: learning spatiotemporal mapping relationships for multimodal multitemporal cloud removal

  • Qian Zhang,
  • Xiangnan Liu,
  • Tao Peng,
  • Xiao Yang,
  • Mengzhen Tang,
  • Xinyu Zou,
  • Meiling Liu,
  • Ling Wu,
  • Tingwei Zhang

DOI
https://doi.org/10.1080/15481603.2024.2330185
Journal volume & issue
Vol. 61, no. 1

Abstract


Optical remotely sensed time series data underpin key applications in monitoring Earth surface dynamics, but cloud cover significantly hampers their analysis and interpretation. Although synthetic aperture radar (SAR)-to-optical image translation has emerged as a promising solution, existing techniques are limited by their inability to adequately account for the intertwined temporal and spatial dimensions. This study introduces U-SeqNet, a model that integrates U-Net and Sequence-to-Sequence (Seq2Seq) architectures. Leveraging a novel spatiotemporal teacher forcing strategy, U-SeqNet adapts to and reconstructs cloud-contaminated observations, capitalizing on available cloud-free acquisitions to improve accuracy. Rigorous assessment with No-Reference and Full-Reference Image Quality Assessment (NR-IQA and FR-IQA) metrics confirms U-SeqNet's strong performance, with a Natural Image Quality Evaluator (NIQE) score of 5.85 and a Mean Absolute Error (MAE) of 0.039. These results underline U-SeqNet's capability for image reconstruction and its potential to improve remote sensing analysis through more accurate and efficient multimodal, multitemporal cloud removal.
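
To make the abstract's architectural idea concrete, the sketch below illustrates (in PyTorch) one plausible reading of a U-Net-style spatial network wrapped in a Seq2Seq loop with spatiotemporal teacher forcing: at each time step the model maps the SAR image plus the previous optical estimate to an optical prediction, and whenever a cloud-free optical observation is available it is fed forward in place of the model's own output. This is a minimal illustration only; all module names, channel counts, and the toy data are assumptions, not the authors' published implementation.

```python
# Minimal sketch of a U-Net + Seq2Seq loop with teacher forcing on cloud-free steps.
# Hypothetical illustration; not the U-SeqNet reference code.
import torch
import torch.nn as nn


class TinyUNet(nn.Module):
    """A drastically simplified U-Net block: one down / one up path with a skip connection."""

    def __init__(self, in_ch, out_ch, base=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Conv2d(base * 2, out_ch, 3, padding=1)  # skip concat -> output bands

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.dec(torch.cat([e, d], dim=1))


class USeqNetSketch(nn.Module):
    """Seq2Seq loop over time: each step runs a shared U-Net on [SAR_t, previous optical estimate]."""

    def __init__(self, sar_ch=2, opt_ch=3):
        super().__init__()
        self.opt_ch = opt_ch
        self.step = TinyUNet(sar_ch + opt_ch, opt_ch)

    def forward(self, sar_seq, opt_seq=None, cloud_free=None):
        # sar_seq: (B, T, C_sar, H, W); opt_seq and cloud_free enable teacher forcing.
        B, T, _, H, W = sar_seq.shape
        prev = torch.zeros(B, self.opt_ch, H, W, device=sar_seq.device)
        outputs = []
        for t in range(T):
            pred = self.step(torch.cat([sar_seq[:, t], prev], dim=1))
            outputs.append(pred)
            if opt_seq is not None and cloud_free is not None:
                # Teacher forcing: feed the observed cloud-free image forward when one exists.
                mask = cloud_free[:, t].view(B, 1, 1, 1).float()
                prev = mask * opt_seq[:, t] + (1 - mask) * pred.detach()
            else:
                prev = pred.detach()
        return torch.stack(outputs, dim=1)


# Toy usage: a 4-step series of 2-band SAR -> 3-band optical, steps 0 and 2 cloud-free.
model = USeqNetSketch()
sar = torch.randn(1, 4, 2, 64, 64)
opt = torch.rand(1, 4, 3, 64, 64)
cloud_free = torch.tensor([[1, 0, 1, 0]], dtype=torch.bool)
recon = model(sar, opt, cloud_free)
mae = (recon - opt).abs().mean()  # full-reference MAE of the kind cited in the abstract
print(recon.shape, float(mae))
```

In this reading, teacher forcing operates jointly in space and time: the cloud-free mask decides, per time step, whether the recurrent input is the observation or the model's own reconstruction, which is one way to exploit partial cloud-free coverage during training.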

Keywords