IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

A Novel Bottleneck Residual and Self-Attention Fusion-Assisted Architecture for Land Use Recognition in Remote Sensing Images

  • Ameer Hamza,
  • Muhammad Attique Khan,
  • Shams ur Rehman,
  • Mohammed Al-Khalidi,
  • Ahmed Ibrahim Alzahrani,
  • Nasser Alalwan,
  • Anum Masood

DOI
https://doi.org/10.1109/JSTARS.2023.3348874
Journal volume & issue
Vol. 17
pp. 2995–3009

Abstract

Rapid yearly population growth is causing hazards to spread swiftly around the world, with detrimental effects on both human life and the global economy. By enabling accurate early prediction, remote sensing helps safeguard against weather-related threats and natural disasters. Convolutional neural networks (CNNs), a mainstay of deep learning, have recently been used to reliably identify land use in remote sensing images. This work proposes a novel bottleneck residual and self-attention fusion-assisted architecture for land use recognition from remote sensing images. First, we propose a fast neural style transfer approach to generate cloud-effect satellite images, with a five-layered residual block CNN to estimate the loss of the neural-style images. We then propose two novel architectures, a three-layered bottleneck CNN and a three-layered bottleneck self-attention CNN, for the classification of land use images. Both architectures are trained on the original dataset and on the neural-style generated dataset. Features are then extracted from the deep layers and merged using a novel serial approach based on weighted entropy. A novel chimp optimization technique is subsequently applied to the fused features to refine them further by removing redundant and superfluous information. Finally, the selected features are classified using neural network classifiers. The proposed framework achieved accuracies of 99.0% and 99.4% on the two datasets, respectively, and demonstrated improved precision and accuracy compared with state-of-the-art methods.
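To make the paper's two core building blocks concrete, the following is a minimal PyTorch sketch of a bottleneck residual block followed by spatial self-attention, the pattern the abstract describes. The channel widths, the 1x1/3x3/1x1 bottleneck layout, the reduction ratio, and the non-local attention formulation are illustrative assumptions here, not the authors' exact configuration.

```python
# Hedged sketch: a bottleneck residual block plus a self-attention block,
# in the spirit of the architecture summarized in the abstract. All sizes
# (channels, reduction ratio, input resolution) are illustrative guesses.
import torch
import torch.nn as nn

class BottleneckResidual(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck with an identity skip connection."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual (skip) connection

class SelfAttention2d(nn.Module):
    """Non-local style self-attention over all spatial positions."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

# One bottleneck + attention stage; stacking three such stages would
# mirror the "three-layered bottleneck self-attention CNN" at sketch level.
block = nn.Sequential(BottleneckResidual(64), SelfAttention2d(64))
features = block(torch.randn(1, 64, 32, 32))
print(features.shape)  # torch.Size([1, 64, 32, 32])
```

The zero-initialized gamma lets the attention branch start as an identity mapping and blend in gradually during training, a common stabilizing choice for this kind of block.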

Keywords