Egyptian Journal of Remote Sensing and Space Sciences (Mar 2025)
Spectral–Spatial Adaptive Weighted Fusion and Residual Dense Network for hyperspectral image classification
Abstract
The dense and nearly continuous spectral bands in hyperspectral images result in strong inter-band correlations, which can diminish model performance in classification tasks. Moreover, most convolutional neural network-based methods for hyperspectral image classification extract spectral–spatial features at a fixed scale, which overlooks the fine-grained features of some objects. To address these issues, a novel Spectral–Spatial Adaptive Weighted Fusion and Residual Dense Network (S2AWF-RDN) is proposed for hyperspectral image classification. Specifically, the proposed S2AWF-RDN consists of a spectral–spatial adaptive weighted fusion module, a multi-channel feature concatenation residual dense module, and a spatial feature fusion module. Firstly, a spectral information optimization branch is developed to adjust the weights assigned to different spectral channels; similarly, a spatial information optimization branch is developed to adjust the weights of different spatial regions. Secondly, to obtain rich spectral–spatial information from different levels, the multi-channel feature concatenation residual dense module is proposed. In addition, a multi-channel feature concatenation block is designed to guide the model in extracting spectral–spatial information at different scales. Finally, the spatial feature fusion module is introduced to retain more spatial information. Experimental results show that the proposed network achieves superior classification performance on three well-known hyperspectral image datasets, and its effectiveness is further corroborated by comparative and ablation studies.
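The adaptive weighting idea behind the two optimization branches described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's learned layers: the function names, the pooling choices, and the softmax/sigmoid activations are assumptions standing in for the trainable spectral and spatial attention branches.

```python
import numpy as np

def spectral_adaptive_weights(cube):
    """Per-band weights for a (H, W, B) hyperspectral cube via global
    average pooling plus softmax (illustrative stand-in for the paper's
    learned spectral information optimization branch)."""
    band_desc = cube.mean(axis=(0, 1))        # (B,) one descriptor per band
    e = np.exp(band_desc - band_desc.max())   # numerically stable softmax
    return e / e.sum()                        # (B,), weights sum to 1

def spatial_adaptive_weights(cube):
    """Per-pixel weight map via band-wise average pooling plus sigmoid
    (illustrative stand-in for the spatial information optimization
    branch)."""
    pixel_desc = cube.mean(axis=2)            # (H, W)
    return 1.0 / (1.0 + np.exp(-pixel_desc))  # values in (0, 1)

def adaptive_weighted_fusion(cube):
    """Reweight the cube by both branches and fuse the two results
    additively (one simple choice of fusion; the paper's module may
    combine branches differently)."""
    w_spec = spectral_adaptive_weights(cube)  # (B,)
    w_spat = spatial_adaptive_weights(cube)   # (H, W)
    spec_branch = cube * w_spec[None, None, :]
    spat_branch = cube * w_spat[:, :, None]
    return spec_branch + spat_branch
```

The key point the sketch captures is that each spectral channel and each spatial region receives its own data-dependent weight before fusion, rather than all bands and pixels contributing equally.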