Jisuanji Kexue (Computer Science) (May 2022)

Infrared and Visible Image Fusion Based on Feature Separation

  • GAO Yuan-hao, LUO Xiao-qing, ZHANG Zhan-cheng

DOI
https://doi.org/10.11896/jsjkx.210200148
Journal volume & issue
Vol. 49, no. 5
pp. 58–63

Abstract


Although a pair of infrared and visible images captured in the same scene belong to different modalities, they share common public information and carry complementary private information. A complete fused image can be obtained by learning and integrating this information. Inspired by residual networks, in the training stage each branch is forced to map to a label with global features through the exchange and addition of feature levels among the network branches. Moreover, each branch is encouraged to learn the private features of its corresponding image. Directly learning the private features of images avoids designing complex fusion rules and preserves the integrity of feature details. In the fusion stage, a maximum fusion strategy is adopted to fuse the private features, which are then added to the learned public features at the decoding layer, and the fused image is finally decoded. The model is trained on a multi-focus data set synthesized from NYU-D2 and tested on the real-world TNO data set. Experimental results show that, compared with current mainstream infrared and visible image fusion algorithms, the proposed algorithm achieves better results in both subjective effects and objective evaluation metrics.
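The fusion stage described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name and the NumPy arrays standing in for network feature maps are assumptions, and the decoder that would follow is omitted. It shows only the stated rule: element-wise maximum over the two private feature maps, then addition of the shared public features.

```python
import numpy as np

def fuse_features(priv_ir, priv_vis, pub):
    """Hypothetical fusion step: max-fuse private features, then
    add the shared public features before decoding."""
    # Element-wise maximum keeps the stronger private response
    # from either modality at each spatial location.
    fused_priv = np.maximum(priv_ir, priv_vis)
    # The learned public (shared) features are added back,
    # mirroring the addition at the decoding layer.
    return fused_priv + pub

# Toy 2x2 feature maps standing in for encoder outputs.
priv_ir = np.array([[1.0, 4.0], [3.0, 0.0]])
priv_vis = np.array([[2.0, 1.0], [5.0, 2.0]])
pub = np.ones((2, 2))
fused = fuse_features(priv_ir, priv_vis, pub)
```

In practice the element-wise maximum acts as a simple, parameter-free fusion rule over the separated private features, which is what lets the method avoid hand-crafted fusion rules elsewhere in the pipeline.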

Keywords