IET Image Processing (Aug 2024)

ADF‐Net: Attention‐guided deep feature decomposition network for infrared and visible image fusion

  • Sen Shen,
  • Taotao Zhang,
  • Haidi Dong,
  • ShengZhi Yuan,
  • Min Li,
  • RenKai Xiao,
  • Xiaohui Zhang

DOI
https://doi.org/10.1049/ipr2.13134
Journal volume & issue
Vol. 18, no. 10
pp. 2774–2787

Abstract

To effectively enhance information acquisition by making full use of the complementary features of infrared and visible images, widely used image fusion algorithms must overcome challenges such as information loss and image blurring. In response to this issue, the authors propose a dual-branch deep hierarchical fusion network (ADF-Net) guided by an attention mechanism. Initially, an attention convolution module extracts the shallow features of the image. Subsequently, a dual-branch deep decomposition feature extractor is introduced, wherein the transformer encoder block (TEB) employs long-range attention to process low-frequency global features, while the CNN encoder block (CEB) extracts high-frequency local information. Ultimately, the global fusion layer based on the TEB and the local fusion layer based on the CEB produce the fused image through the decoder. Multiple experiments, employing two-stage training and an appropriate loss function for training and testing, demonstrate that ADF-Net excels in various aspects.
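The overall flow described in the abstract (shallow attention-guided features, dual-branch global/local decomposition, fusion, decoding) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class names (AttentionConv, TEB, CEB, ADFNetSketch), channel widths, single-channel inputs, and the 1x1-convolution fusion layers standing in for the paper's TEB- and CEB-based fusion layers are all assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class AttentionConv(nn.Module):
    """Shallow feature extractor: convolution followed by channel attention.
    Hypothetical stand-in for the paper's attention convolution module."""
    def __init__(self, in_ch, ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.conv(x)
        return f * self.attn(f)          # reweight channels of the shallow features


class TEB(nn.Module):
    """Transformer encoder block: global (low-frequency) branch."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=ch, nhead=heads, dim_feedforward=2 * ch, batch_first=True)

    def forward(self, f):                        # f: (B, C, H, W)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        tokens = self.block(tokens)              # long-range self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class CEB(nn.Module):
    """CNN encoder block: local (high-frequency) branch."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, f):
        return f + self.block(f)                 # residual connection keeps fine detail


class ADFNetSketch(nn.Module):
    """Sketch of the pipeline: shallow features -> dual-branch decomposition ->
    global/local fusion (placeholder 1x1 convolutions here) -> decoder."""
    def __init__(self, ch=32):
        super().__init__()
        self.shallow_ir = AttentionConv(1, ch)
        self.shallow_vi = AttentionConv(1, ch)
        self.teb, self.ceb = TEB(ch), CEB(ch)
        self.fuse_global = nn.Conv2d(2 * ch, ch, 1)   # stands in for the TEB-based fusion layer
        self.fuse_local = nn.Conv2d(2 * ch, ch, 1)    # stands in for the CEB-based fusion layer
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, ir, vi):
        f_ir, f_vi = self.shallow_ir(ir), self.shallow_vi(vi)
        g = self.fuse_global(torch.cat([self.teb(f_ir), self.teb(f_vi)], 1))
        l = self.fuse_local(torch.cat([self.ceb(f_ir), self.ceb(f_vi)], 1))
        return self.decoder(torch.cat([g, l], 1))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 64, 64)   # dummy infrared input
    vi = torch.rand(1, 1, 64, 64)   # dummy visible input
    print(ADFNetSketch()(ir, vi).shape)   # torch.Size([1, 1, 64, 64])
```

Running the script passes dummy infrared and visible tensors through the two branches and prints the fused output shape; the two-stage training procedure and the loss function mentioned in the abstract are not reproduced here.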

Keywords