Jisuanji kexue (May 2022)
Infrared and Visible Image Fusion Based on Feature Separation
Abstract
Although a pair of infrared and visible images captured of the same scene belong to different modalities, they share common public information and carry complementary private information. A complete fused image can be obtained by learning and integrating this information. Inspired by residual networks, in the training stage each branch is forced to map its input to a label containing global features through the exchange and addition of feature maps between network branches; in addition, each branch is encouraged to learn the private features of its corresponding image. Learning the private features of the images directly avoids the design of complex hand-crafted fusion rules and preserves the integrity of feature details. In the fusion stage, a maximum fusion strategy is adopted to fuse the private features, which are then added to the learned public features at the decoding layer, and the fused image is finally decoded. The model is trained on a multi-focus data set synthesized from NYU-D2 and tested on the real-world TNO data set. Experimental results show that, compared with current mainstream infrared and visible image fusion algorithms, the proposed algorithm achieves better results in both subjective visual quality and objective evaluation metrics.
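To illustrate the fusion stage described above, the following is a minimal sketch, assuming PyTorch-style tensors; the tensor names and the `decoder` module are illustrative placeholders rather than the authors' implementation. It shows the element-wise maximum fusion of the two private feature maps, the addition of the shared public features, and the final decoding step.

```python
import torch
import torch.nn as nn

def fuse_features(private_ir: torch.Tensor,
                  private_vis: torch.Tensor,
                  public_feat: torch.Tensor,
                  decoder: nn.Module) -> torch.Tensor:
    """Hypothetical fusion step (names and decoder are assumptions).

    private_ir / private_vis: private feature maps extracted from the
    infrared and visible branches, same shape [B, C, H, W].
    public_feat: shared (public) features learned during training.
    decoder: a decoding network that maps features back to an image.
    """
    # Maximum fusion strategy over the private features
    fused_private = torch.maximum(private_ir, private_vis)
    # Add the learned public features at the decoding layer
    fused_feat = fused_private + public_feat
    # Decode the combined features into the fused image
    return decoder(fused_feat)
```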
Keywords