IEEE Access (Jan 2022)

MMI-Fuse: Multimodal Brain Image Fusion With Multiattention Module

  • Zhenghe Shi,
  • Chuanwei Zhang,
  • Dan Ye,
  • Peilin Qin,
  • Rui Zhou,
  • Lei Lei

DOI
https://doi.org/10.1109/ACCESS.2022.3163260
Journal volume & issue
Vol. 10
pp. 37200–37214

Abstract


Medical imaging plays a pivotal role in the clinical diagnosis of brain disease. Many imaging modalities can probe the state of brain tissue, and each has both strengths and shortcomings. For example, magnetic resonance imaging (MRI) captures structural information but no functional characteristics of tissue, while positron emission tomography (PET) captures functional characteristics but no structural information. Attention mechanisms have been widely used in image fusion tasks, such as the fusion of infrared and visible images and the fusion of medical images. However, existing attention models lack a mechanism for balancing multimodal image features, which degrades the final fusion performance. This paper proposes an end-to-end multimodal brain image fusion framework, MMI-Fuse. Specifically, we first apply an autoencoder to extract features from the source images. Then, an information-preservation-weighted channel-spatial attention model (ICS) is proposed to fuse the image features, with an adaptive weight set according to the degree of information preservation of each feature. Finally, a decoder reconstructs the fused medical image. With the help of the improved attention model and the encoder-decoder structure, the proposed method improves the quality of fused images while reducing fusion time. To validate its performance, we collected 1590 pairs of multimodal brain images from the Harvard dataset and performed extensive experiments, comparing seven methods on five metrics. The results demonstrate that the proposed method achieves strong performance in both visual quality and objective metric scores among these approaches, and it also requires the least runtime of all compared methods.
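To make the pipeline described in the abstract concrete, the sketch below shows one plausible encoder -> attention fusion -> decoder arrangement in PyTorch. The abstract does not give the ICS formulation, so the channel/spatial attention, the entropy-based "information preservation" weight, and all layer sizes and names (Encoder, Decoder, MMIFuseSketch, channel_spatial_attention, information_weight) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an encoder -> weighted attention fusion -> decoder pipeline.
# The attention and weighting below are assumed stand-ins for the paper's ICS module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Shared convolutional encoder that extracts features from one modality."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs the fused image from the fused feature maps."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, f):
        return self.net(f)


def channel_spatial_attention(f):
    """Simple channel + spatial attention applied to a (B, C, H, W) feature tensor."""
    ch = torch.sigmoid(F.adaptive_avg_pool2d(f, 1))   # (B, C, 1, 1) channel weights
    sp = torch.sigmoid(f.mean(dim=1, keepdim=True))   # (B, 1, H, W) spatial weights
    return f * ch * sp


def information_weight(f, eps=1e-8):
    """Assumed proxy for 'information preservation': entropy of each feature map."""
    p = F.softmax(f.flatten(1), dim=1)
    ent = -(p * (p + eps).log()).sum(dim=1)            # (B,)
    return ent.view(-1, 1, 1, 1)


class MMIFuseSketch(nn.Module):
    """End-to-end fusion: encode both modalities, fuse with adaptively weighted
    channel-spatial attention features, then decode the fused image."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, mri, pet):
        f_mri = channel_spatial_attention(self.encoder(mri))
        f_pet = channel_spatial_attention(self.encoder(pet))
        w_mri, w_pet = information_weight(f_mri), information_weight(f_pet)
        w_sum = w_mri + w_pet
        fused = (w_mri / w_sum) * f_mri + (w_pet / w_sum) * f_pet
        return self.decoder(fused)


if __name__ == "__main__":
    model = MMIFuseSketch()
    mri = torch.rand(1, 1, 256, 256)   # grayscale MRI slice
    pet = torch.rand(1, 1, 256, 256)   # PET slice (single channel for simplicity)
    print(model(mri, pet).shape)       # torch.Size([1, 1, 256, 256])
```

In this reading, the adaptive weights normalize each modality's contribution by how much information its attended features retain, which is one way to realize the "balance mechanism for multimodal image features" the abstract argues existing attention models lack.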

Keywords