Remote Sensing (May 2024)

Activated Sparsely Sub-Pixel Transformer for Remote Sensing Image Super-Resolution

  • Yongde Guo,
  • Chengying Gong,
  • Jun Yan

DOI
https://doi.org/10.3390/rs16111895
Journal volume & issue
Vol. 16, no. 11
p. 1895

Abstract

Transformers have recently achieved significant breakthroughs in various visual tasks. However, these methods often overlook the optimization of interactions between convolution and transformer blocks. Although the basic attention module strengthens feature selection, it remains weak at generating high-quality output. To address this challenge, we propose integrating sub-pixel space and applying sparse coding theory to the computation of self-attention. This approach aims to enhance the network's generative capability, leading to the development of a sparse-activated sub-pixel transformer network (SSTNet). Experimental results show that, compared with several state-of-the-art methods, the proposed network produces better generation results, improving the sharpness of object edges and the richness of detail texture in super-resolution images.
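The sketch below is a minimal, illustrative interpretation of the two ingredients named in the abstract: lifting features into sub-pixel space via pixel shuffle and sparsifying the self-attention weights. The function names, the top-k sparsification rule, and the shared query/key/value projection are assumptions made for clarity; they are not taken from the paper itself.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) features into (C, H*r, W*r) sub-pixel space."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

def sparse_self_attention(tokens, k=8):
    """Self-attention where each query keeps only its top-k scores (sparse activation).

    `tokens` has shape (N, d); projections are omitted for brevity.
    """
    d = tokens.shape[-1]
    q = key = v = tokens
    scores = q @ key.T / np.sqrt(d)                      # (N, N) similarities
    # zero out everything except the top-k entries per row before softmax
    drop_idx = np.argsort(scores, axis=-1)[:, :-k]
    np.put_along_axis(scores, drop_idx, -np.inf, axis=-1)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy usage: lift low-resolution features to sub-pixel space, flatten spatial
# positions into tokens, and apply the sparsified attention.
feat = np.random.randn(64 * 4, 16, 16)                   # C=64, upscale factor r=2
sub = pixel_shuffle(feat, r=2)                           # (64, 32, 32)
tokens = sub.reshape(64, -1).T                           # (1024, 64) tokens
out = sparse_self_attention(tokens, k=8)
```

The top-k masking stands in for the sparse-coding idea of activating only the most relevant key/value pairs per query; the actual SSTNet formulation may differ.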

Keywords