IEEE Access (Jan 2024)

Layer Decomposition Learning Based on Discriminative Feature Group Split With Bottom-Up Intergroup Feature Fusion for Single Image Deraining

  • Yunseon Jang,
  • Duc-Tai Le,
  • Chang-Hwan Son,
  • Hyunseung Choo

DOI
https://doi.org/10.1109/ACCESS.2024.3407750
Journal volume & issue
Vol. 12
pp. 78024 – 78039

Abstract

Rain streaks impede image feature extraction, hindering the performance of computer vision algorithms such as pedestrian and lane detection in adverse weather conditions. Image deraining is therefore crucial for enhancing the reliability of such algorithms. However, object detail and texture in background regions are often lost during deraining because of their structural similarity to rain streaks. To remove rain streaks effectively while preserving image details, we propose a novel layer decomposition learning network (LDLNet) that separates rain streaks and object details in rainy images. LDLNet consists of two parts: the discriminative group feature split (DGFS) and the group feature merging (GFM). DGFS utilizes sparse residual attention modules (SRAMs) to capture the spatial contextual features of rainy images, enhancing the network’s ability to understand the complex relationships between rain streaks and object details. In addition, DGFS employs the bottom-up intergroup feature fusion (BIFF) approach to aggregate multi-scale context information from successive SRAMs, facilitating the decomposition of rainy images into discriminative feature groups. Subsequently, GFM integrates these feature groups by concatenation, preserving the interdependent characteristics of the clean background and rain layers. Experimental results reveal that the proposed approach achieves superior rain removal and detail preservation on both synthetic datasets and real-world rainy images compared to state-of-the-art rain removal models.
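The abstract does not provide implementation details, but the described pipeline (stacked SRAMs inside DGFS, bottom-up fusion via BIFF, a split into background and rain feature groups, and GFM merging by concatenation, with the rain layer subtracted from the input) can be sketched roughly in PyTorch as below. This is a minimal illustrative sketch only: the class names (LDLNetSketch, SRAM), channel widths, and the placeholder attention and fusion layers are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SRAM(nn.Module):
    """Stand-in for a sparse residual attention module: a residual conv
    block gated by a spatial attention map (details assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.body(x)
        return x + feat * self.attn(feat)  # attention-weighted residual

class LDLNetSketch(nn.Module):
    """Illustrative LDLNet-style layer decomposition: stacked SRAMs extract
    features, a bottom-up fusion path (BIFF placeholder) aggregates them,
    the fused features are split into background and rain groups (DGFS),
    and GFM merges the groups by concatenation to estimate the rain layer."""
    def __init__(self, channels=64, num_srams=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.srams = nn.ModuleList([SRAM(channels) for _ in range(num_srams)])
        # Bottom-up intergroup fusion: 1x1 convs that fold each lower-level
        # feature into the next one (placeholder for the paper's BIFF).
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, 1) for _ in range(num_srams - 1)]
        )
        self.split = nn.Conv2d(channels, 2 * channels, 1)       # two feature groups
        self.merge = nn.Conv2d(2 * channels, 3, 3, padding=1)   # GFM -> rain layer

    def forward(self, rainy):
        feats = []
        x = self.head(rainy)
        for sram in self.srams:
            x = sram(x)
            feats.append(x)
        fused = feats[0]
        for f, conv in zip(feats[1:], self.fuse):                # bottom-up aggregation
            fused = conv(torch.cat([fused, f], dim=1))
        bg_group, rain_group = self.split(fused).chunk(2, dim=1)  # discriminative groups
        rain_layer = self.merge(torch.cat([bg_group, rain_group], dim=1))
        return rainy - rain_layer, rain_layer                    # derained image, rain layer

For example, derained, rain = LDLNetSketch()(torch.rand(1, 3, 128, 128)) returns a restored background and an estimated rain layer of the same spatial size as the input; the two-output form mirrors the layer-decomposition idea of predicting the rain layer rather than the clean image directly.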

Keywords