IEEE Access (Jan 2024)
Remote Sensing Image Pansharpening Using Deep Internal Learning With Residual Double-Attention Network
Abstract
In recent years, deep convolutional neural networks (CNNs) have significantly improved pansharpening performance compared with traditional methods. However, existing CNN-based pansharpening methods still lack spatial detail and suffer from spectral distortion. To address these problems, this study proposes a deep learning network based on channel and spatial attention mechanisms that enhances the spatial resolution and reduces the spectral distortion of the pansharpened image. The proposed network employs a shallow feature extraction (SFE) unit to exploit the spatial and spectral features of the panchromatic (PAN) and multispectral (MS) input images. In addition, a double-attention feature fusion (DAFF) module, composed of residual double-attention modules (RDAMs) with long and short skip connections, is designed to improve the spatial resolution and alleviate the spectral distortion of the output image. In the experiments, we adopted a deep internal learning strategy in which the training data were extracted from a large scene of the observed image itself. We evaluated the effectiveness of the proposed method on WorldView-3, SPOT-7, Pleiades, and GeoEye datasets. The performance of the proposed method was compared with several existing deep learning-based pansharpening techniques: the deep residual pansharpening neural network (DRPNN), residual network (ResNet), residual dense model for pansharpening network (RDMPSnet), symmetric skipped connection convolutional neural network (SSC-CNN), and triplet attention network with information interaction (TANI). The experimental results show that the proposed method outperformed all the compared methods in terms of both quantitative quality metrics and visual assessment.
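To make the abstract's architectural description concrete, the following is a minimal sketch, not the authors' exact design, of one plausible residual double-attention module (RDAM): a small convolutional body followed by channel attention and spatial attention, wrapped by a short skip connection. The layer widths, the squeeze-and-excitation-style channel attention, and the CBAM-style spatial attention are assumptions introduced only for illustration.

```python
# Hypothetical PyTorch sketch of an RDAM-like block (channel + spatial attention
# with a short residual skip). All hyperparameters here are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # re-weight each channel


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)        # per-pixel channel average
        max_map = x.max(dim=1, keepdim=True).values  # per-pixel channel maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # re-weight each spatial location


class RDAM(nn.Module):
    """Residual double-attention module: conv body -> channel attn -> spatial attn + skip."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = self.sa(self.ca(self.body(x)))
        return x + out                               # short skip connection


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)             # e.g., fused PAN/MS feature maps
    print(RDAM(64)(feats).shape)                     # torch.Size([1, 64, 128, 128])
```

In a full DAFF module, several such blocks would typically be stacked, with an additional long skip connection from the shallow features to the module output; those details follow the abstract's description only at a high level and are not specified here.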
Keywords