Jisuanji Kexue (Computer Science), Apr 2022

Infrared and Visible Image Fusion Network Based on Optical Transmission Model Learning

  • YAN Min, LUO Xiao-qing, ZHANG Zhan-cheng

DOI
https://doi.org/10.11896/jsjkx.210200174
Journal volume & issue
Vol. 49, no. 4
pp. 215 – 220

Abstract


The fusion of infrared and visible images yields more comprehensive and richer information. Because no ground-truth reference image exists, existing fusion networks simply try to strike a balance between the two modalities. Since existing data sets lack ground-truth labels, supervised learning methods cannot be applied directly to image fusion. This paper proposes a multi-modal image synthesis method based on the ambient light transmission model. Using the NYU-Depth labeled data set and its depth annotations, a set of infrared and visible multi-modal image pairs, together with their ground-truth fusion images, is synthesized. An edge loss function and a detail loss function are introduced into a conditional GAN, and the network is trained end to end on the synthesized multi-modal image data set, yielding a fusion network. The trained network makes the fused image retain the details of the visible image and the characteristics of the infrared image, and sharpens the boundaries of thermal targets in the infrared image. Comparisons with state-of-the-art methods, including IFCNN, DenseFuse, and FusionGAN, on the public TNO benchmark data set verify the effectiveness of the proposed method under both subjective and objective image quality evaluation.
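The abstract does not give the synthesis formulas, but a depth-conditioned ambient light transmission model is conventionally the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)), where J is the clean image, d the depth map, A the ambient light, and β a scattering coefficient. The sketch below, with hypothetical function names and parameters (the paper's exact formulation may differ), shows how a modality image could be synthesized from an RGB-D pair, along with a simple gradient-based edge loss of the kind the paper introduces into its conditional GAN:

```python
import numpy as np

def synthesize_transmission_image(clean, depth, beta=1.0, ambient=0.8):
    """Synthesize a degraded modality image from a clean image and its
    depth map via the standard ambient-light transmission model:
        I = J * t + A * (1 - t),  with  t = exp(-beta * depth).
    `clean` is a (H, W) or (H, W, C) float array in [0, 1];
    `depth` is a (H, W) float array (larger = farther)."""
    t = np.exp(-beta * depth)                 # per-pixel transmission
    if clean.ndim == 3:                       # broadcast over channels
        t = t[..., None]
    return clean * t + ambient * (1.0 - t)

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a (H, W) image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_loss(fused, reference):
    """L1 distance between gradient maps of fused and reference images;
    a stand-in for the paper's edge loss, whose exact form is not given."""
    return float(np.mean(np.abs(gradient_magnitude(fused)
                                - gradient_magnitude(reference))))
```

With β = 0 the transmission is 1 everywhere and the synthesized image equals the clean input; as β grows, distant pixels approach the ambient light value, which is what lets the depth annotations of NYU-Depth stand in for thermal/visible appearance differences.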

Keywords