IEEE Access (Jan 2021)

Infrared and Visible Image Fusion Method Based on ResNet in a Nonsubsampled Contourlet Transform Domain

  • Ce Gao,
  • Donghao Qi,
  • Yanchao Zhang,
  • Congcong Song,
  • Yi Yu

DOI
https://doi.org/10.1109/ACCESS.2021.3086096
Journal volume & issue
Vol. 9
pp. 91883–91895

Abstract


Although traditional image fusion methods can produce rich fused images, the results often contain obvious artificial noise and artifacts. Fusion algorithms based on neural networks avoid these shortcomings, but they are more complex and less flexible. In this study, we propose a fusion method based on the deep residual network ResNet152 that not only effectively suppresses artificial noise but also preserves image edge details and improves the efficiency of the neural network. The infrared and visible images are first decomposed by a multiscale transform in an optimized nonsubsampled contourlet transform (NSCT) domain; ResNet152 then extracts deep features from the low-pass components to guide their fusion, while the bandpass components are fused by the maximum-modulus rule. This approach fully retains the global features and structural information of the source images in the fused result. Experiments on public test image sets show that, on a subjective level, the proposed method produces sharper edges and fewer noise artifacts than traditional fusion methods; from an objective perspective, its average scores on the evaluation metrics are higher than those of the other fusion methods.
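The pipeline outlined in the abstract (multiscale decomposition, deep-feature-guided low-pass fusion, maximum-modulus bandpass fusion) can be sketched in a few lines of Python. The sketch below is not the authors' implementation: a simple Gaussian low-pass/band-pass split stands in for the optimized NSCT decomposition, and a channel-wise l1-norm over torchvision's pretrained ResNet152 feature maps stands in for the paper's deep-feature guidance. Only the maximum-modulus rule for the bandpass layer follows the abstract directly.

```python
# Minimal sketch of the fusion pipeline, under the assumptions stated above.
import numpy as np
import torch
import torchvision.models as models
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    """Split an image into low-pass and band-pass layers
    (a Gaussian stand-in for the NSCT sub-bands)."""
    low = gaussian_filter(img, sigma)
    return low, img - low

# Truncate ResNet152 after its last residual stage to keep spatial feature maps.
backbone = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

def deep_weight(low, extractor):
    """Activity map from deep features: channel-wise l1-norm of the
    ResNet152 feature maps, upsampled back to the image size."""
    x = torch.from_numpy(low).float()[None, None].repeat(1, 3, 1, 1)
    with torch.no_grad():
        feat = extractor(x)                        # (1, C, h, w)
    act = feat.abs().sum(dim=1, keepdim=True)      # l1 norm over channels
    act = torch.nn.functional.interpolate(
        act, size=low.shape, mode="bilinear", align_corners=False)
    return act[0, 0].numpy()

def fuse(ir, vis):
    """Fuse a registered infrared / visible pair (float arrays in [0, 1])."""
    ir_low, ir_band = decompose(ir)
    vis_low, vis_band = decompose(vis)

    # Low-pass fusion guided by deep features: normalized weight map.
    w_ir = deep_weight(ir_low, feature_extractor)
    w_vis = deep_weight(vis_low, feature_extractor)
    w = w_ir / (w_ir + w_vis + 1e-8)
    fused_low = w * ir_low + (1.0 - w) * vis_low

    # Band-pass fusion by the maximum-modulus rule.
    fused_band = np.where(np.abs(ir_band) >= np.abs(vis_band), ir_band, vis_band)
    return np.clip(fused_low + fused_band, 0.0, 1.0)
```

A genuine NSCT decomposition would yield several directional bandpass sub-bands per scale rather than a single detail layer, and the paper's low-pass fusion strategy may differ from the simple weight map shown here; the sketch only conveys the overall structure of the method.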

Keywords