Jisuanji Kexue yu Tansuo (Journal of Frontiers of Computer Science and Technology) (Jul 2024)

Feature Refinement and Multi-scale Attention for Transformer Image Denoising Network

  • YUAN Heng, GENG Yikun

DOI
https://doi.org/10.3778/j.issn.1673-9418.2308091
Journal volume & issue
Vol. 18, no. 7
pp. 1838 – 1851

Abstract

To strengthen the relevance of global context information, focus attention on multi-scale features, and improve denoising quality while preserving detail as far as possible, a Transformer-based feature refinement and multi-scale attention image denoising network (TFRADNet) is proposed. The network uses Transformer blocks in the encoder-decoder to address the long-range dependence problem in large-scale images and improve denoising efficiency, and it adds a position-awareness layer after the up-sampling operation to enhance the network's perception of pixel positions in the feature map. Because the Transformer neglects spatial relationships among pixels, which can distort local detail, a feature refinement block (FRB) is designed for the feature reconstruction stage; its serial structure introduces nonlinear transformations layer by layer to better recognize local image features under complex noise levels. In addition, a multi-scale attention block (MAB) with a parallel double-branch structure jointly models spatial attention and channel attention, effectively capturing and weighting image features at different scales and improving the model's perception of multi-scale features. Experimental results on the real-noise datasets SIDD, DND and RNI15 show that TFRADNet balances global information with local detail and achieves stronger noise suppression and robustness than other state-of-the-art methods.
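As a rough illustration of the parallel double-branch idea behind the MAB, the sketch below runs a channel-attention branch and a spatial-attention branch on the same feature map and fuses them additively. All specifics here (pooling choices, sigmoid gating, additive fusion, tensor shapes) are assumptions for illustration, not the paper's actual MAB design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); squeeze spatial dims into one weight per channel
    w = sigmoid(feat.mean(axis=(1, 2)))            # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # collapse channels into a single spatial weight map
    m = sigmoid(feat.mean(axis=0))                 # (H, W)
    return feat * m[None, :, :]

def multi_scale_attention_block(feat):
    # parallel double branch: both attentions see the same input,
    # and their outputs are fused by summation (assumed fusion rule)
    return channel_attention(feat) + spatial_attention(feat)

feat = np.random.rand(8, 16, 16).astype(np.float32)
out = multi_scale_attention_block(feat)
print(out.shape)  # (8, 16, 16)
```

Running the branches in parallel (rather than serially, as in CBAM-style attention) lets each branch weight the unmodified input features, which matches the abstract's description of jointly modeling spatial and channel attention.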

Keywords