Remote Sensing (May 2022)

TE-SAGAN: An Improved Generative Adversarial Network for Remote Sensing Super-Resolution Images

  • Yongyang Xu,
  • Wei Luo,
  • Anna Hu,
  • Zhong Xie,
  • Xuejing Xie,
  • Liufeng Tao

DOI
https://doi.org/10.3390/rs14102425
Journal volume & issue
Vol. 14, no. 10
p. 2425

Abstract


Resolution is a comprehensive indicator of the visual quality of remote sensing images, and super-resolution processing has been widely applied to extract information from such images. Recently, deep learning methods have found increasing application in the super-resolution processing of remote sensing images; however, issues such as blurry object edges and residual artifacts persist. To overcome these issues, this study proposes an improved generative adversarial network with self-attention and texture enhancement (TE-SAGAN) for remote sensing super-resolution images. We first designed an improved generator based on the residual dense block with a self-attention mechanism and weight normalization. The generator gains stronger feature extraction capability and more stable training, which improves edge contours and textures. Subsequently, a joint loss, combining L1-norm, perceptual, and texture losses, is designed to optimize the training process and remove artifacts. The L1-norm loss ensures the consistency of low-frequency pixels; the perceptual loss preserves medium- and high-frequency details; and the texture loss constrains local features during the super-resolution process. The results of experiments using a publicly available dataset (UC Merced Land Use dataset) and our own dataset show that the proposed TE-SAGAN yields clear edges and textures in the super-resolution reconstruction of remote sensing images.
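The following is a minimal sketch, not the authors' implementation, of how a joint loss of this kind (L1 pixel term, perceptual term, and Gram-matrix texture term) can be combined in PyTorch. The feature extractor, the Gram-matrix texture statistic, and the weights w_pix, w_perc, and w_tex are illustrative assumptions; the paper's actual formulation and weighting may differ.

```python
# Sketch of a joint loss combining L1 (pixel), perceptual (feature-space),
# and texture (Gram-matrix) terms. Names and weights are assumptions for
# illustration, not values taken from the paper.
import torch
import torch.nn.functional as F


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix used as a texture statistic."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def joint_loss(sr, hr, feature_extractor,
               w_pix=1.0, w_perc=0.1, w_tex=0.01):
    """Joint loss for a super-resolved image `sr` against the
    high-resolution target `hr`."""
    # L1-norm loss: keeps low-frequency pixel content consistent.
    pixel_loss = F.l1_loss(sr, hr)

    # Perceptual loss: compares mid/high-frequency detail in feature space
    # (e.g., activations of a pretrained network such as VGG).
    feat_sr = feature_extractor(sr)
    feat_hr = feature_extractor(hr).detach()
    perceptual_loss = F.l1_loss(feat_sr, feat_hr)

    # Texture loss: matches local texture statistics via Gram matrices.
    texture_loss = F.l1_loss(gram_matrix(feat_sr), gram_matrix(feat_hr))

    return w_pix * pixel_loss + w_perc * perceptual_loss + w_tex * texture_loss
```

In this sketch the three terms play the roles described in the abstract: the L1 term anchors low-frequency pixel consistency, the feature-space term targets medium- and high-frequency detail, and the Gram-matrix term constrains local texture.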

Keywords