BMC Medical Imaging (Sep 2024)

Multimodal medical image fusion based on interval gradients and convolutional neural networks

  • Xiaolong Gu,
  • Ying Xia,
  • Jie Zhang

DOI: https://doi.org/10.1186/s12880-024-01418-x
Journal volume & issue: Vol. 24, No. 1, pp. 1–15

Abstract

Many image fusion methods have been proposed to leverage the advantages of functional and anatomical images while compensating for their shortcomings. These methods integrate functional and anatomical images while presenting physiological and metabolic organ information, making their diagnostic efficiency far greater than that of single-modal images. Most existing multimodal medical image fusion methods are based on multiscale transformation, in which pyramid features are obtained through multiscale decomposition: low-resolution levels are used to analyse approximate image features, high-resolution levels are used to analyse detailed image features, and different fusion rules are applied at each scale. Although such methods can effectively fuse multimodal medical images, much detailed information is lost during the multiscale and inverse transformations, resulting in blurred edges and a loss of detail in the fused images. To overcome this problem, a multimodal medical image fusion method based on interval gradients and convolutional neural networks is proposed. First, interval gradients are used to decompose each image into structure and texture images. Second, deep neural networks are used to extract perception images. Three separate rules are then used to fuse the structure, texture, and perception images. Finally, the fused components are combined, after colour transformation, to obtain the final fusion image. Compared with the reference algorithms, the proposed method performs better on multiple objective indicators: $Q_{EN}$, $Q_{NIQE}$, $Q_{SD}$, $Q_{SSEQ}$, and $Q_{TMQI}$.
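The interval-gradient decomposition step can be illustrated with a minimal 1-D sketch. The idea (following the interval-gradient literature) is that one-sided weighted averages around a sample smooth out oscillatory texture but preserve structural edges, so pointwise gradients can be rescaled by the interval gradient to suppress texture while keeping structure. The function names, window size `k`, and Gaussian weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def one_sided_means(I, k=4, sigma=2.0):
    """Gaussian-weighted means over the right interval {p+1..p+k}
    and the left interval {p-k+1..p} of each sample (edges clamped)."""
    n = len(I)
    off = np.arange(1, k + 1)
    w = np.exp(-(off - 1) ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    idx = np.arange(n)
    right = np.zeros(n)
    left = np.zeros(n)
    for j, o in enumerate(off):
        right += w[j] * I[np.minimum(idx + o, n - 1)]
        left += w[j] * I[np.maximum(idx - o + 1, 0)]
    return left, right

def interval_gradient(I, k=4, sigma=2.0):
    # Difference of one-sided means: large at structural edges,
    # near zero inside oscillatory texture.
    left, right = one_sided_means(I, k, sigma)
    return right - left

def rescaled_gradient(I, k=4, sigma=2.0, eps=1e-8):
    """Keep pointwise gradients that agree in sign with the interval
    gradient; shrink or zero out texture-like gradients."""
    g = np.diff(I, append=I[-1:])          # forward differences
    ig = interval_gradient(I, k, sigma)
    scale = np.where(g * ig > 0,
                     np.minimum(1.0, np.abs(ig) / (np.abs(g) + eps)),
                     0.0)
    return scale * g
```

Reconstructing a signal from the rescaled gradients yields the structure image; subtracting it from the input yields the texture image. On a step edge the rescaled gradient stays near the full edge height, while on an alternating texture pattern it shrinks towards zero, which is the behaviour the decomposition relies on.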

Keywords