Advances in Multimedia (Jan 2024)

Image Super-Resolution Reconstruction Based on the Lightweight Hybrid Attention Network

  • Chu Yuezhong,
  • Wang Kang,
  • Zhang Xuefeng,
  • Liu Heng

DOI
https://doi.org/10.1155/2024/2293286
Journal volume & issue
Vol. 2024

Abstract


To address the large parameter counts and high computational complexity of current image super-resolution models, this paper proposes a lightweight hybrid attention network (LHAN). LHAN consists of three parts: shallow feature extraction, lightweight hybrid attention blocks (LHAB), and an upsampling module. Each LHAB combines multiscale self-attention with large-kernel attention. To keep the network lightweight, the multiscale self-attention block (MSSAB) improves the self-attention mechanism by computing attention in groups over windows of different sizes. In the large-kernel attention branch, depthwise separable convolutions are used to reduce parameters, and the large-kernel convolution is replaced by a normal convolution followed by a dilated convolution, leaving the receptive field unchanged. Experiments on ×4 super-resolution over five datasets, including Set5 and Set14, show that the proposed method performs well in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Specifically, on the Urban benchmark dataset, our method improves PSNR by 0.10 dB over SwinIR while reducing the parameter count by 315K and the computational cost (floating-point operations, FLOPs) by 16.4G. The proposed LHAN thus reduces both parameters and computation while achieving excellent reconstruction quality.
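The two lightweighting tricks the abstract mentions can be illustrated with simple parameter and receptive-field arithmetic. This is a sketch under assumed kernel sizes and channel widths, not the paper's actual configuration:

```python
# Illustrative arithmetic for two lightweighting techniques described in the
# abstract. The kernel sizes (21, 5, 7) and channel width (64) below are
# assumptions for illustration, not values taken from the paper.

def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel) + 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

def stacked_receptive_field(specs):
    """Receptive field of stacked convolutions; specs = [(kernel, dilation), ...]."""
    rf = 1
    for k, d in specs:
        rf += (k - 1) * d
    return rf

c = 64  # hypothetical channel width

# Depthwise separable convolution cuts parameters dramatically:
print(conv_params(21, c, c))                 # standard 21x21 conv -> 1806336
print(depthwise_separable_params(21, c, c))  # separable variant   -> 32320

# A normal 5x5 conv followed by a 7x7 conv with dilation 3 covers a 23x23
# field, comparable to a single large 21x21 kernel at far lower cost:
print(stacked_receptive_field([(5, 1), (7, 3)]))  # -> 23
print(stacked_receptive_field([(21, 1)]))         # -> 21
```

The same decomposition idea underlies large-kernel attention designs generally: the stacked small/dilated pair preserves the wide receptive field while the parameter count grows with the small kernels rather than the large one.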