Entropy (Dec 2022)

BPDGAN: A GAN-Based Unsupervised Back Project Dense Network for Multi-Modal Medical Image Fusion

  • Shangwang Liu,
  • Lihan Yang

DOI
https://doi.org/10.3390/e24121823
Journal volume & issue
Vol. 24, no. 12
p. 1823

Abstract


Single-modality medical images often do not contain enough valid information to meet the requirements of clinical diagnosis, and diagnostic efficiency is limited when multiple images must be examined at the same time. Image fusion is a technique that combines functional modalities such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), integrating their complementary information into a single image. Fusing two anatomical modalities (e.g., CT and MRI) is also often required in place of MRI alone, and the fused images can improve the efficiency and accuracy of clinical diagnosis. To achieve high-quality, high-resolution, detail-rich fusion without handcrafted priors, this paper proposes an unsupervised deep learning image fusion framework, named the back project dense generative adversarial network (BPDGAN). In particular, we construct a novel network based on the back project dense block (BPDB) and the convolutional block attention module (CBAM). The BPDB effectively mitigates the impact of black backgrounds on image content, while the CBAM improves the performance of BPDGAN on texture and edge information. Qualitative and quantitative experiments demonstrate the superiority of BPDGAN: it outperforms state-of-the-art methods by approximately 19.58%, 14.84%, 10.40% and 86.78% on the AG, EI, Qabf and Qcv metrics, respectively.
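For readers unfamiliar with the attention component mentioned in the abstract, the following is a minimal PyTorch sketch of a CBAM-style module (channel attention followed by spatial attention, as published by Woo et al., 2018). It is an illustrative implementation only, not the exact configuration used in BPDGAN; the paper-specific back project dense block (BPDB) is not reproduced here, and the feature-map shape in the usage example is hypothetical.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Reweights channels using global average- and max-pooled descriptors."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
            scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * scale

    class SpatialAttention(nn.Module):
        """Reweights spatial positions from channel-wise average and max maps."""
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
            mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
            scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * scale

    class CBAM(nn.Module):
        """Channel attention followed by spatial attention."""
        def __init__(self, channels, reduction=16, kernel_size=7):
            super().__init__()
            self.ca = ChannelAttention(channels, reduction)
            self.sa = SpatialAttention(kernel_size)

        def forward(self, x):
            return self.sa(self.ca(x))

    # Usage example (hypothetical shape): refine a 64-channel feature map.
    feats = torch.randn(1, 64, 128, 128)
    refined = CBAM(64)(feats)

In a fusion network such as the one described, a module of this kind is typically inserted after convolutional blocks so that texture and edge features receive larger attention weights before the fused image is reconstructed.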

Keywords