BMC Medical Imaging (Jul 2024)
MR–CT image fusion method of intracranial tumors based on Res2Net
Abstract
Background
Information complementarity can be achieved by fusing MR and CT images, and the fused images contain abundant soft tissue and bone information, facilitating accurate auxiliary diagnosis and tumor target delineation.
Purpose
The purpose of this study was to construct high-quality fused images of intracranial tumors from MR and CT images by using the Residual-Residual Network (Res2Net) method.
Methods
This paper proposes an MR and CT image fusion method based on Res2Net. The method comprises three components: a feature extractor, a fusion layer, and a reconstructor. The feature extractor uses the Res2Net framework to extract multiscale features from the source images. The fusion layer incorporates a fusion strategy based on spatial mean attention, adaptively adjusting the fusion weights of the feature maps at each position to preserve fine details from the source images. Finally, the fused features are fed into the reconstructor to reconstruct the fused image.
Results
Qualitative results indicate that the proposed fusion method produces clear boundary contours and accurate localization of tumor regions. Quantitative results show that the method achieves an average gradient of 4.6771, a spatial frequency of 13.2055, an entropy of 1.8663, and a visual information fidelity for fusion of 0.5176. Comprehensive experimental results demonstrate that, compared with advanced fusion algorithms, the proposed method preserves more texture details and structural information in the fused images, reduces spectral artifacts and information loss, and performs better in terms of both visual quality and objective metrics.
Conclusion
The proposed method effectively combines MR and CT image information, enabling precise localization of tumor region boundaries and assisting clinicians in clinical diagnosis.
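To make the three-part pipeline in the Methods concrete, the following is a minimal PyTorch sketch, not the authors' released implementation. The class and parameter names (FusionNet, Res2NetBlock, channels, scales) are hypothetical, the Res2Net block is a simplified single-stage variant, and spatial mean attention is rendered here as a channel-mean activity map per source normalised with a softmax across the two sources, which is one plausible reading of the strategy described above.

```python
# Hypothetical sketch of the fusion pipeline: Res2Net-style feature extractor,
# spatial-mean-attention fusion layer, and a small convolutional reconstructor.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Res2NetBlock(nn.Module):
    """Simplified Res2Net block: channels are split into `scales` groups and
    processed by hierarchical 3x3 convolutions to obtain multiscale features
    within a single residual block."""

    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0
        self.width = channels // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(self.width, self.width, 3, padding=1)
            for _ in range(scales - 1)
        )

    def forward(self, x):
        splits = torch.split(x, self.width, dim=1)
        outs, prev = [splits[0]], None
        for i, conv in enumerate(self.convs, start=1):
            inp = splits[i] if prev is None else splits[i] + prev
            prev = F.relu(conv(inp))
            outs.append(prev)
        return torch.cat(outs, dim=1) + x  # residual connection


class FusionNet(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.embed = nn.Conv2d(1, channels, 3, padding=1)  # single-channel slices
        self.extractor = Res2NetBlock(channels)
        self.reconstructor = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    @staticmethod
    def spatial_mean_attention(feat_mr, feat_ct):
        """Per-pixel fusion weights from the channel-wise mean activation of
        each source, softmax-normalised across the two sources."""
        act_mr = feat_mr.mean(dim=1, keepdim=True)
        act_ct = feat_ct.mean(dim=1, keepdim=True)
        weights = torch.softmax(torch.cat([act_mr, act_ct], dim=1), dim=1)
        return weights[:, :1] * feat_mr + weights[:, 1:] * feat_ct

    def forward(self, mr, ct):
        feat_mr = self.extractor(F.relu(self.embed(mr)))
        feat_ct = self.extractor(F.relu(self.embed(ct)))
        fused = self.spatial_mean_attention(feat_mr, feat_ct)
        return self.reconstructor(fused)


if __name__ == "__main__":
    mr = torch.rand(1, 1, 256, 256)  # dummy MR slice
    ct = torch.rand(1, 1, 256, 256)  # dummy CT slice
    print(FusionNet()(mr, ct).shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch the fusion weights sum to one at every pixel, so the fused feature map is a convex, position-dependent blend of the MR and CT features, which is the behaviour the abstract attributes to the spatial mean attention strategy.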
Keywords