Journal of King Saud University: Computer and Information Sciences (Oct 2023)

AFCANet: An adaptive feature concatenate attention network for multi-focus image fusion

  • Shuaiqi Liu,
  • Weijian Peng,
  • Yali Liu,
  • Jie Zhao,
  • Yonggang Su,
  • Yudong Zhang

Journal volume & issue
Vol. 35, no. 9
p. 101751

Abstract


For multi-focus image fusion, existing deep learning-based methods cannot effectively learn the texture features and semantic information of the source images needed to generate high-quality fused images. We therefore develop a new adaptive feature concatenate attention network, named AFCANet, which adaptively learns cross-layer features and retains the texture features and semantic information of the source images to generate visually appealing, fully focused images. AFCANet uses an encoder-decoder network as its backbone. In the unsupervised training stage, we design an adaptive cross-layer skip-connection scheme and build a cross-layer adaptive coordinate attention module that extracts meaningful information from the image while suppressing unimportant information, yielding a better fusion result. In addition, between the encoder and decoder we introduce an efficient channel attention module to fully exploit the encoder output and accelerate network convergence. In the inference stage, we apply a pixel-based spatial frequency fusion rule to fuse the adaptive features learned by the encoder, which combines the texture and semantic information of the image and produces a more precise decision map. Extensive experiments on public datasets and the HBU-CVMDSP dataset show that AFCANet improves the accuracy of the decision map in both focused and defocused regions and better preserves the fine details and edge features of the source images.
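As a rough illustration of the inference-stage fusion rule mentioned in the abstract, the sketch below implements the classic pixel-wise spatial frequency measure, SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS horizontal and vertical first differences in a local window, and selects, per pixel, the source with the higher SF. This is not the authors' implementation: the paper applies the rule to encoder features rather than raw images, and the function names and window radius here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(img, radius=3):
    """Per-pixel spatial frequency over a (2*radius+1)^2 window.

    SF = sqrt(RF^2 + CF^2), where RF/CF are the RMS horizontal/vertical
    first differences inside the local window.
    """
    img = img.astype(np.float64)
    # Squared first differences along rows (horizontal) and columns (vertical).
    # Border row/column keeps 0, a small edge bias acceptable for a sketch.
    rf2 = np.zeros_like(img)
    cf2 = np.zeros_like(img)
    rf2[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2
    cf2[1:, :] = (img[1:, :] - img[:-1, :]) ** 2
    size = 2 * radius + 1
    # Local means of the squared differences approximate RF^2 and CF^2.
    return np.sqrt(uniform_filter(rf2, size) + uniform_filter(cf2, size))

def sf_decision_map(src_a, src_b, radius=3):
    """Binary decision map: 1 where source A has higher local SF than B."""
    return (spatial_frequency(src_a, radius)
            >= spatial_frequency(src_b, radius)).astype(np.float64)

def fuse(src_a, src_b, radius=3):
    """Fuse two registered grayscale multi-focus inputs via the map."""
    d = sf_decision_map(src_a, src_b, radius)
    return d * src_a + (1.0 - d) * src_b
```

In practice, raw spatial-frequency decision maps are usually post-processed (e.g. small-region removal or consistency verification) before the final fusion; the abstract does not specify which, if any, refinements AFCANet applies.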
