IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2022)

A Deep-Shallow Fusion Network With Multidetail Extractor and Spectral Attention for Hyperspectral Pansharpening

  • Yu-Wei Zhuo,
  • Tian-Jing Zhang,
  • Jin-Fan Hu,
  • Hong-Xia Dou,
  • Ting-Zhu Huang,
  • Liang-Jian Deng

DOI: https://doi.org/10.1109/JSTARS.2022.3202866
Journal volume & issue: Vol. 15, pp. 7539–7555

Abstract

Hyperspectral (HS) pansharpening aims at fusing a low-resolution HS image with a high-resolution panchromatic (PAN) image to obtain an HS image with both higher spectral and spatial resolutions. However, existing HS pansharpening algorithms are mainly adapted from multispectral pansharpening approaches, which struggle to restore the rich spectral information carried by the continuous bands and much broader spectral range of HS images, leading to spectral distortion and spatial blur. In this paper, we develop a new hyperspectral pansharpening network architecture (called Hyper-DSNet) to fully preserve latent spatial details and spectral fidelity via a deep-shallow fusion structure with a multi-detail extractor and spectral attention. First, to address spatial ambiguity, five types of high-pass filter templates are applied to fully extract the spatial details of the PAN image, forming a so-called multi-detail extractor. Then, a multi-scale convolution module and a deep-shallow fusion structure, which reduces the parameter count by decreasing the number of output channels as the network deepens, are applied sequentially. Finally, a spectral attention module is employed to preserve the rich spectral information of HS images. Visual and quantitative experiments on three commonly used simulated datasets and one full-resolution dataset demonstrate the effectiveness and robustness of the proposed Hyper-DSNet compared with recent state-of-the-art hyperspectral pansharpening techniques. Ablation studies and discussions further verify our contributions, e.g., better spectral preservation and spatial detail recovery.
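
Below is a minimal PyTorch sketch of two components named in the abstract: a multi-detail extractor that applies a bank of fixed high-pass filter templates to the PAN image, and a spectral attention module that reweights each HS band via channel-wise gating. The specific five kernels (Laplacian, Sobel, Prewitt), the class names, the reduction ratio, and the 31-band input are illustrative assumptions; the paper's actual choices are not stated in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDetailExtractor(nn.Module):
    # Applies five fixed high-pass templates to the PAN image to extract details.
    # The template set below is an assumption for illustration.
    def __init__(self):
        super().__init__()
        kernels = torch.tensor([
            [[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]],    # Laplacian
            [[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],     # Sobel x
            [[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]],     # Sobel y
            [[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]],     # Prewitt x
            [[-1., -1., -1.], [0., 0., 0.], [1., 1., 1.]],     # Prewitt y
        ]).unsqueeze(1)                                        # shape (5, 1, 3, 3)
        self.register_buffer("kernels", kernels)               # fixed, not learned

    def forward(self, pan):                                    # pan: (B, 1, H, W)
        return F.conv2d(pan, self.kernels, padding=1)          # details: (B, 5, H, W)

class SpectralAttention(nn.Module):
    # Band-wise attention: global average pooling, bottleneck MLP, sigmoid gating.
    def __init__(self, bands, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(bands, bands // reduction), nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands), nn.Sigmoid())

    def forward(self, x):                                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                        # one weight per band
        return x * w.unsqueeze(-1).unsqueeze(-1)               # reweight each band

# Usage sketch on random tensors (31 HS bands assumed):
pan = torch.rand(1, 1, 128, 128)
hs_features = torch.rand(1, 31, 128, 128)
details = MultiDetailExtractor()(pan)                          # (1, 5, 128, 128)
out = SpectralAttention(bands=31)(hs_features)                 # (1, 31, 128, 128)

In the paper's pipeline these pieces would sit at opposite ends of the network: the extracted PAN details feed the multi-scale convolution and deep-shallow fusion stages, and the spectral attention is applied near the output to favor spectral preservation.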

Keywords