IET Image Processing (Feb 2023)

Continuous learning deraining network based on residual FFT convolution and contextual transformer module

  • Zhijia Zhang,
  • Sinan Wu,
  • Xinming Peng,
  • Wanting Wang,
  • Rui Li

DOI
https://doi.org/10.1049/ipr2.12669
Journal volume & issue
Vol. 17, no. 3
pp. 747 – 760

Abstract


Single image deraining has attracted considerable attention, and an effective single image deraining network, RFCTNet, is designed here. Specifically, the residual Fast Fourier Transform convolution module is used as the baseline of the deraining network; its advantage is the ability to capture both long- and short-range information interactions and to integrate high- and low-frequency background information. A novel Transformer module, the Contextual Transformer module, is also introduced: contextual knowledge of the rain-streak neighbourhood is learned to improve the recovery of textures and background information while focusing on rain pattern features. Because rain streaks overlap one another, removing them in a single stage is difficult, so a multi-stage progressive rain removal scheme is used. To address the catastrophic forgetting problem of deep learning, a continual learning method, PIGWM, is adopted, which enables the network to retain knowledge of previously seen datasets. Extensive experiments show that RFCTNet outperforms other advanced methods on evaluation metrics for both synthetic and real datasets. The source code can be found at https://github.com/SUTwu/RFCTNet
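
As context for the baseline described above, the following is a minimal sketch of a residual FFT convolution block in PyTorch, assuming the common "spatial branch + frequency branch" design: a local convolutional path for short-range interactions and pointwise convolutions on the Fourier spectrum for global, long-range interactions. The class name, layer choices, and channel handling are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
# Illustrative residual FFT convolution block (assumed design, not the paper's exact code).
import torch
import torch.nn as nn


class ResFFTConvBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: 3x3 convolutions capture short-range (local) interactions.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Frequency branch: 1x1 convolutions over the concatenated real/imaginary
        # parts of the spectrum act globally, i.e. long-range interactions.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Real 2-D FFT over the spatial dimensions.
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.freq(spec)
        real, imag = spec.chunk(2, dim=1)
        freq_out = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        # Residual fusion of the identity, spatial, and frequency paths.
        return x + self.spatial(x) + freq_out
```

In this sketch the residual sum lets high- and low-frequency background information pass through the block while both local and global rain-streak structure is modelled; stacking such blocks would form one stage of a multi-stage progressive deraining pipeline.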

Keywords