IEEE Access (Jan 2022)
Deep Learning L2 Norm Fusion for Infrared & Visible Images
Abstract
Image fusion combines data from multiple source images to improve information quality. Infrared images distinguish objects from their surroundings based primarily on differences in thermal radiation, and they remain effective in all weather conditions, whether day or night. Visible images capture texture information with high visual precision and a level of detail consistent with the human visual system. Combining the thermal radiation information of the infrared modality with the precise visual detail of the visible modality is therefore desirable. The presented algorithm uses the $\ell _{2} $ norm together with residual networks to combine the complementary information of the two image modalities. The encoder consists of convolutional layers with selected residual connections, in which the output of each layer is connected to every other layer. The $\ell _{2} $ norm strategy is then used to fuse the two feature maps. Finally, the decoder reconstructs the fused image. A high mutual information value of 14.85084 indicates that the fused image retains more complementary information from the infrared and visible inputs, and a high entropy value of 6.92286 indicates greater information content and richer edge information in the fused image. The proposed architecture preserves more pixel information from both the infrared and visible images, and the fused image appears more natural because it retains more textural content. The proposed system achieves noteworthy performance compared with existing models.
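To make the $\ell _{2} $ norm fusion strategy concrete, the sketch below fuses two encoder feature maps by weighting each modality with its per-pixel $\ell _{2} $ activity level. This is a minimal illustration under common assumptions from the feature-level fusion literature, not the authors' exact implementation; the function name, the weighting scheme, and the `eps` parameter are illustrative.

```python
import numpy as np

def l2_norm_fusion(feat_ir, feat_vis, eps=1e-8):
    """Fuse two encoder feature maps of shape (C, H, W).

    Hedged sketch: the activity level of each modality is taken as
    the l2 norm across channels at every spatial location, and the
    fused map is the activity-weighted sum of the two inputs.
    """
    # Activity maps: per-pixel l2 norm over the channel axis -> (H, W)
    act_ir = np.linalg.norm(feat_ir, ord=2, axis=0)
    act_vis = np.linalg.norm(feat_vis, ord=2, axis=0)

    # Soft weights proportional to each modality's activity level
    w_ir = act_ir / (act_ir + act_vis + eps)
    w_vis = 1.0 - w_ir

    # Weighted sum, broadcasting the (H, W) weights over channels
    return w_ir[None] * feat_ir + w_vis[None] * feat_vis
```

In this formulation, regions where the infrared features have higher activity (e.g., hot targets) contribute more to the fused map, while highly textured regions favor the visible features, which matches the complementary behavior described above.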
Keywords