Applied Sciences (Feb 2024)

A Multichannel-Based Deep Learning Framework for Ocean SAR Scene Classification

  • Chengzu Bai,
  • Shuo Zhang,
  • Xinning Wang,
  • Jiaqiang Wen,
  • Chong Li

DOI
https://doi.org/10.3390/app14041489
Journal volume & issue
Vol. 14, no. 4
p. 1489

Abstract


High-resolution synthetic aperture radars (SARs) have become indispensable environmental monitoring systems for capturing important geophysical phenomena on the Earth's land and sea surfaces. However, there is a lack of comprehensive models that can orchestrate the large-scale datasets produced by satellite missions such as GaoFen-3 and Sentinel-1. In addition, SAR images of different ocean scenes must convey a variety of high-level classification features of oceanic and atmospheric phenomena. In this study, we propose a multichannel neural network (MCNN) that supports oceanic SAR scene classification from limited data samples through multi-feature fusion, data augmentation, and multichannel feature extraction. To exploit the multichannel semantics of SAR scenes, the multi-feature fusion module effectively combines and reshapes the spatiotemporal SAR images to preserve their structural properties. A fine-grained feature augmentation policy further improves data quality, so the classification model is less sensitive to both small- and large-scale datasets. The multichannel feature extraction module aggregates the different oceanic features convolutionally extracted from ocean SAR scenes, improving the classification accuracy for oceanic phenomena at different scales. Through extensive experimental analysis, our MCNN framework demonstrates commendable classification performance, achieving an average precision of 96%, an average recall of 95%, and an average F-score of 95% across ten distinct oceanic phenomena. Notably, it surpasses two state-of-the-art classification techniques, AlexNet and CMwv, by margins of 23.7% and 18.3%, respectively.
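The abstract does not give the MCNN's exact architecture, but the idea of extracting features through several parallel channels and aggregating them before classification can be sketched minimally. The kernel sizes, channel count, and fusion-by-concatenation below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid' cross-correlation, standing in for a conv layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multichannel_features(img, kernels):
    """Run one feature extractor per channel, then fuse by concatenation."""
    maps = [np.maximum(conv2d_valid(img, k), 0) for k in kernels]  # ReLU
    return np.concatenate([m.ravel() for m in maps])  # fused feature vector

rng = np.random.default_rng(0)
sar_patch = rng.standard_normal((8, 8))                    # stand-in SAR patch
kernels = [rng.standard_normal((3, 3)) for _ in range(3)]  # three parallel channels
features = multichannel_features(sar_patch, kernels)
print(features.shape)  # each 3x3 kernel yields a 6x6 map -> 3 * 36 = 108 features
```

In a real network the fused vector would feed a classifier head over the ten oceanic-phenomenon classes; here the sketch only shows how per-channel feature maps are aggregated into a single representation.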

Keywords