IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)
A Unified Super-Resolution Framework of Remote-Sensing Satellite Images Classification Based on Information Fusion of Novel Deep Convolutional Neural Network Architectures
Abstract
Land-use and land-cover (LULC) classification is an active research challenge in the area of remotely sensed satellite images due to critical applications such as resource management and agriculture. Deep learning has recently shown significant improvement in LULC classification using satellite images; however, complex and similar patterns in the images make the classification process more challenging. This article proposes a new information-fused framework for LULC classification from remotely sensed imaging data. The proposed framework consists of two phases: training and testing. In the training phase, an augmentation process is applied to resolve the class-imbalance issue. Next, two novel convolutional neural network architectures are proposed: ResSAN6, based on six residual blocks, and RS-IRSAN, based on six inverted residual blocks. The designed models are trained from scratch, with hyperparameters initialized using the Bayesian optimization algorithm. In the testing phase, test-set images are passed through the trained models, and deep features are extracted from the self-attention layer. A novel mutual information-based serial fusion approach is proposed that combines the features of both models, and variation in the fused features is removed using median normalization. The fused feature set is then refined using an arithmetic optimization (AO) algorithm, which improves both the computational time and the precision of the fusion step. The most informative features selected by AO are finally classified using a shallow wide neural network. The experimental process of the proposed framework has been performed on three datasets, RSI-CB128, WHU-RS19, and NWPU-RESISC45, achieving accuracies of 95.7%, 97.5%, and 92.0%, respectively. Compared with recent related works, the proposed framework shows improved accuracy and precision rates.
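As a rough illustration of the fusion and selection steps summarized above, the following Python sketch performs serial (concatenation-based) fusion of two deep feature sets, median normalization, and a mutual information-based ranking of the fused features. It is a minimal sketch only: the function and variable names (`fuse_features`, `feats_a`, `feats_b`, `keep_ratio`) are hypothetical, scikit-learn's `mutual_info_classif` stands in for the paper's scoring, and the AO-based refinement stage is not reproduced here.

```python
# Minimal sketch of mutual information-based serial feature fusion with
# median normalization, assuming features were already extracted from the
# self-attention layers of two trained networks. Names are illustrative;
# this does not reproduce the paper's exact fusion or AO selection rules.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def median_normalize(feats):
    """Scale each feature column by its median to reduce feature variation."""
    med = np.median(feats, axis=0)
    med[med == 0] = 1.0  # guard against division by zero
    return feats / med

def fuse_features(feats_a, feats_b, labels, keep_ratio=0.5):
    """Serially concatenate two deep feature sets, normalize them, and
    keep the columns with the highest mutual information w.r.t. labels."""
    fused = np.concatenate([feats_a, feats_b], axis=1)  # serial fusion
    fused = median_normalize(fused)
    mi = mutual_info_classif(fused, labels, random_state=0)
    k = max(1, int(keep_ratio * fused.shape[1]))
    top = np.argsort(mi)[::-1][:k]  # indices of most informative columns
    return fused[:, top]

# Usage with random stand-in features (two 256-D models, 100 samples):
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(100, 256))
feats_b = rng.normal(size=(100, 256))
labels = rng.integers(0, 5, size=100)
selected = fuse_features(feats_a, feats_b, labels)
print(selected.shape)  # (100, 256) with the default keep_ratio of 0.5
```

The selected feature matrix would then feed a downstream classifier, which the paper realizes as a shallow wide neural network.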
Keywords