IEEE Access (Jan 2020)
Two-Scale Multimodal Medical Image Fusion Based on Guided Filtering and Sparse Representation
Abstract
Medical image fusion techniques integrate the complementary features of different medical images into a single composite image of superior quality, reducing the uncertainty of lesion analysis. However, simultaneously extracting more salient features while suppressing meaningless details with multi-scale transform methods remains a challenging task. This study presents a two-scale fusion framework for multimodal medical images to overcome this limitation. In the framework, a guided filter decomposes the source images into base and detail layers, roughly separating their two characteristics: structural information and texture details. To effectively preserve most of the structural information, the base layers are fused using a combined Laplacian pyramid and sparse representation rule, in which an image patch selection-based dictionary construction scheme excludes meaningless patches from the source images and enhances the sparse representation capability of the pyramid-decomposed low-frequency layer. The detail layers are then merged using a guided filtering-based approach, which suppresses noise while enhancing contrast as much as possible. The fused base and detail layers are reconstructed to generate the fused image. We experimentally verify the superiority of the proposed method using two basic fusion schemes and comparison experiments on nine pairs of medical images from diverse modalities. Comparisons of the fused results in terms of visual effect and objective assessment demonstrate that the proposed method yields better visual quality with improved objective measurements because it effectively preserves meaningful salient features without producing abnormal details.
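The two-scale decomposition step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the standard guided filter formulation (box-filter means over a square window) with illustrative `radius` and `eps` values that are assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=0.01):
    """Edge-preserving smoothing of `src` steered by `guide`.

    Standard guided-filter recipe: fit a local linear model
    q = a * guide + b in each window, then average the coefficients.
    """
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode="reflect")
    mean_i, mean_p = mean(guide), mean(src)
    var_i = mean(guide * guide) - mean_i * mean_i
    cov_ip = mean(guide * src) - mean_i * mean_p
    a = cov_ip / (var_i + eps)        # per-pixel linear coefficient
    b = mean_p - a * mean_i
    return mean(a) * guide + mean(b)  # smoothed output q

def two_scale_decompose(img, radius=8, eps=0.01):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = guided_filter(img, img, radius, eps)  # self-guided smoothing
    detail = img - base                          # residual keeps textures/edges
    return base, detail
```

Because the detail layer is defined as the residual, summing the fused base and detail layers reconstructs an image with no information lost by the decomposition itself.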
Keywords