IET Computer Vision (Mar 2019)
Image fusion method based on simultaneous sparse representation with non‐subsampled contourlet transform
Abstract
Image fusion methods based on sparse representation in the single-scale image domain have recently produced better fusion results than classic methods based on multi-scale analysis. However, because the number of dictionary atoms is limited, such methods struggle to describe image details accurately and are computationally expensive. To address these problems, a novel dictionary is constructed by combining the non-subsampled contourlet transform with sparse representation under the proposed simultaneous strategy. This dictionary unites the sparsity of a learned dictionary with the multi-scale features of the non-subsampled contourlet transform. Moreover, the simultaneous strategy ensures that the sparse coefficients of the source images are represented with the same dictionary atoms, so they can be compared accurately and consistently. Finally, an image fusion method built on this dictionary is proposed and named non-subsampled contourlet transform (NSCT)–simultaneous sparse representation (SSR). Experimental results show that the proposed NSCT–SSR method achieves better fusion quality and stronger noise robustness than existing fusion methods based on either the multi-scale domain or sparse representation in the single-scale image domain.
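To make the dictionary-based fusion step concrete, the sketch below shows a simplified stand-in for the coefficient-level fusion described above, not the authors' exact algorithm: the NSCT decomposition is omitted (a single sub-band of each registered source is fused), scikit-learn's MiniBatchDictionaryLearning plays the role of the learned dictionary, and the simultaneous strategy is approximated by coding both sources over one shared dictionary with OMP rather than a true simultaneous-OMP with enforced common support. The function name, patch size, atom count, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d


def fuse_subband(a, b, patch_size=(8, 8), n_atoms=64, n_nonzero=4):
    """Fuse two registered source sub-band images (e.g. NSCT low-pass bands,
    an illustrative assumption) by coding their patches over one shared
    learned dictionary and keeping, per patch, the code with larger L1 activity."""
    pa = extract_patches_2d(a, patch_size).reshape(-1, patch_size[0] * patch_size[1])
    pb = extract_patches_2d(b, patch_size).reshape(-1, patch_size[0] * patch_size[1])
    mean_a, mean_b = pa.mean(axis=1, keepdims=True), pb.mean(axis=1, keepdims=True)
    pa_c, pb_c = pa - mean_a, pb - mean_b

    # Learn one dictionary from patches of both sources so corresponding patches
    # are expressed over the same atoms and their codes are directly comparable.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    dico.fit(np.vstack([pa_c, pb_c]))

    ca = dico.transform(pa_c)  # sparse codes of source A
    cb = dico.transform(pb_c)  # sparse codes of source B

    # Max-L1 fusion rule: per patch, keep the coefficient vector with larger activity.
    choose_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
    fused_codes = np.where(choose_a[:, None], ca, cb)
    fused_means = np.where(choose_a[:, None], mean_a, mean_b)

    # Reconstruct fused patches from the fused codes and average overlaps.
    fused_patches = (fused_codes @ dico.components_ + fused_means).reshape(-1, *patch_size)
    return reconstruct_from_patches_2d(fused_patches, a.shape)
```

In the full NSCT–SSR pipeline, a step of this kind would be applied to the sub-bands produced by the NSCT decomposition of each source image, after which the fused sub-bands would be passed through the inverse transform to obtain the final fused image.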
Keywords