IEEE Access (Jan 2020)

MGMDcGAN: Medical Image Fusion Using Multi-Generator Multi-Discriminator Conditional Generative Adversarial Network

  • Jun Huang,
  • Zhuliang Le,
  • Yong Ma,
  • Fan Fan,
  • Hao Zhang,
  • Lei Yang

DOI
https://doi.org/10.1109/ACCESS.2020.2982016
Journal volume & issue
Vol. 8
pp. 55145 – 55157

Abstract

In this paper, we propose a novel end-to-end model for fusing medical images that characterize structural information, i.e., IS, with images that characterize functional information, i.e., IF, at different resolutions, using a multi-generator multi-discriminator conditional generative adversarial network (MGMDcGAN). In the first cGAN, the generator aims to produce a real-like fused image, guided by a specifically designed content loss, to fool two discriminators, while the discriminators aim to distinguish structural differences between the fused image and the source images. On this basis, we employ a second cGAN with a mask to enhance dense-structure information in the final fused image while preventing the functional information from being weakened. Consequently, the final fused image is forced to concurrently retain the structural information in IS and the functional information in IF. In addition, as a unified method, MGMDcGAN can be applied to different kinds of medical image fusion, i.e., MRI-PET, MRI-SPECT, and CT-SPECT, where MRI and CT are high-resolution structural images (IS), while PET and SPECT are typical low-resolution functional images (IF). Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our MGMDcGAN over state-of-the-art methods.
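To make the objective described in the abstract concrete, the following is a minimal NumPy sketch of what such a dual-discriminator cGAN loss could look like. The paper's exact loss formulation is not reproduced here; the function names (`content_loss`, `generator_adv_loss`), the gradient-based structure term, the intensity term, and the weighting `gamma` are all illustrative assumptions, not the authors' actual definitions.

```python
import numpy as np

def content_loss(fused, i_s, i_f, gamma=0.5):
    """Hypothetical content loss for structural/functional image fusion.

    Combines a gradient term that encourages the fused image to keep the
    edges of the structural image I_S with an intensity term that keeps
    the pixel distribution of the functional image I_F. The weighting
    gamma is an assumed hyperparameter, not taken from the paper.
    """
    # Total variation-style edge measure: sum of absolute finite differences.
    grad = lambda x: (np.abs(np.diff(x, axis=0)).sum()
                      + np.abs(np.diff(x, axis=1)).sum())
    grad_term = abs(grad(fused) - grad(i_s))   # preserve structure of I_S
    int_term = np.mean((fused - i_f) ** 2)     # preserve intensity of I_F
    return int_term + gamma * grad_term

def generator_adv_loss(d1_out, d2_out, eps=1e-8):
    """Adversarial term: the generator tries to drive BOTH discriminator
    outputs toward 1 ('real'), so it must fool the structure-oriented and
    the function-oriented discriminator simultaneously."""
    return -(np.log(d1_out + eps) + np.log(d2_out + eps))
```

A fused image identical to both sources yields zero content loss, and discriminator outputs near 1 yield a near-zero adversarial term, so the generator's total loss is minimized exactly when both constraints in the abstract are satisfied at once.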

Keywords