IEEE Access (Jan 2023)

A Target-Aware Fusion Framework for Infrared and Visible Images

  • Yingmei Zhang
  • Hyo Jong Lee

DOI
https://doi.org/10.1109/ACCESS.2023.3246481
Journal volume & issue
Vol. 11
pp. 33666 – 33681

Abstract

Infrared and visible image fusion aims to obtain an image that retains both the prominent infrared target and the detailed texture information of the source images. Most scale filter-based decomposition methods attempt to extract more detailed features by increasing the number of decomposition layers, but they fail to fully consider the intrinsic properties of the original images and the influence of noise. To address this issue and achieve better fusion and detection performance, this paper proposes a fusion method based on a globalet filter and a detail enhancement model, which together form a target-aware fusion framework. The globalet filter is first designed from three perspectives based on global point-to-point optimization: target brightness, removal of gradients at large and small scales (noise reduction), and preservation of texture details. Mathematically, three constrained error measure equations are constructed between the target output and the source image in the form of the $L_{2}$-norm and the first-order derivative difference. Next, a weighted average operator and a detail enhancement model are proposed to guide the corresponding sub-layers. This model creates connections between the detail layers and the input images so that, after applying the “maximum absolute” rule, the fused detail layer contains the best pixel regions from these images to the greatest possible extent. The fused image is then reconstructed by adding the previously obtained sub-images. Extensive experiments demonstrate that our method outperforms state-of-the-art fusion methods, particularly in highlighting infrared targets and preserving substantial detail, while achieving an average detection accuracy exceeding 98%.
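The “maximum absolute” rule referenced in the abstract is a standard per-pixel selection strategy for fusing detail layers. The sketch below is a generic illustration of that rule only, not the authors' full detail enhancement model; the function name and example arrays are illustrative assumptions:

```python
import numpy as np

def fuse_detail_layers(detail_ir: np.ndarray, detail_vis: np.ndarray) -> np.ndarray:
    """Fuse two detail layers with the 'maximum absolute' rule:
    at each pixel, keep the coefficient with the larger magnitude."""
    mask = np.abs(detail_ir) >= np.abs(detail_vis)
    return np.where(mask, detail_ir, detail_vis)

# Toy detail layers (2x2) from a hypothetical decomposition.
d_ir = np.array([[0.5, -0.1], [0.0, 0.3]])
d_vis = np.array([[-0.2, 0.4], [0.1, -0.3]])
print(fuse_detail_layers(d_ir, d_vis))  # [[0.5 0.4], [0.1 0.3]]
```

In a complete pipeline, the fused detail layer produced this way would be added back to a fused base layer to reconstruct the final image.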

Keywords