IEEE Access (Jan 2024)

High Efficient Spatial and Radiation Information Mutual Enhancing Fusion Method for Visible and Infrared Image

  • Zongzhen Liu,
  • Yuxing Wei,
  • Geli Huang,
  • Chao Li,
  • Jianlin Zhang,
  • Meihui Li,
  • Dongxu Liu,
  • Xiaoming Peng

DOI
https://doi.org/10.1109/ACCESS.2024.3351774
Journal volume & issue
Vol. 12
pp. 6971 – 6992

Abstract


Visible and infrared image fusion is an important image-enhancement technique that aims to generate high-quality fused images with prominent targets and rich textures in extreme environments. However, in scenes with extreme lighting the texture details of the visible image degrade severely, so the fused images produced by most current fusion methods have poor visual quality, which seriously hampers subsequent high-level vision tasks such as target detection and tracking. To address these challenges, this paper bridges the gap between image fusion and high-level vision tasks by proposing an efficient fusion method in which spatial and radiometric information mutually enhance each other. First, we design a gradient residual dense block (LGCnet) to improve the description of fine spatial details in the fusion network. Then, we develop a cross-modal perceptual fusion (CMPF) module to facilitate modal interaction within the network, effectively enhancing the fusion of complementary information between modalities and reducing redundant learning. Finally, we design an adaptive light-aware network (ALPnet) that guides the training of the fusion network so that it adaptively selects the most effective information for fusion under different lighting conditions. Extensive experiments show that the proposed approach has competitive advantages over six current state-of-the-art deep-learning methods in highlighting target features and describing the global scene.
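To give a rough intuition for the "gradient residual" idea mentioned in the abstract, the following minimal NumPy sketch shows one common way such a branch is built: a Sobel operator extracts fine spatial gradients from a feature map, and the gradient magnitude is added back as a residual. All names here are illustrative assumptions, not the authors' actual LGCnet implementation.

```python
import numpy as np

# Illustrative sketch only: a gradient-residual branch in the spirit of the
# paper's gradient residual dense block (not the authors' code).

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Same-size 3x3 convolution with edge replication padding."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def gradient_residual(feature):
    """Add the Sobel gradient magnitude back onto the feature map."""
    gx = conv2d(feature, SOBEL_X)
    gy = conv2d(feature, SOBEL_Y)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return feature + grad  # residual connection preserves the input

feature = np.zeros((8, 8))
feature[:, 4:] = 1.0          # a vertical step edge
out = gradient_residual(feature)
```

In this toy example the residual is largest along the step edge, which is the point of such a branch: fine spatial detail is re-injected rather than washed out by deeper layers.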
