IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

CroMoDa: Unsupervised Oriented SAR Ship Detection via Cross-Modality Distribution Alignment

  • Xi Chen,
  • Zhirui Wang,
  • Wenhao Wang,
  • Xinyi Xie,
  • Jian Kang,
  • Ruben Fernandez-Beltran

DOI: https://doi.org/10.1109/JSTARS.2024.3420901
Journal volume & issue: Vol. 17, pp. 11899–11914

Abstract

Most state-of-the-art deep-learning-based synthetic aperture radar (SAR) ship detection methods require large amounts of labeled data for network training. However, annotation is labor- and resource-intensive, especially for SAR images, since annotators need relevant background knowledge. Leveraging available labeled optical imagery datasets, we propose an unsupervised oriented SAR ship detection method based on cross-modality distribution alignment, termed CroMoDa. It consists of four components: 1) image-level feature alignment; 2) low-level feature despeckling; 3) cross-modality pseudo-label self-training; and 4) cross-modality object alignment. By aligning multilevel feature distributions, modality-invariant features across the two imagery modalities can be learned. To counter speckle noise and other interferences in SAR images, a loss term applied to low-level features enhances object information and thereby improves detection accuracy. Moreover, the proposed pseudo-label self-training generates oriented SAR ship annotations more reliably than competing methods, which facilitates learning more discriminative features for SAR ship instances. On two optical–SAR dataset configurations, the proposed method is evaluated against other state-of-the-art approaches, demonstrating its strong potential for SAR ship detection in real applications.
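The paper does not give its alignment loss here, but the idea of matching optical and SAR feature distributions can be illustrated with a minimal sketch. Assuming feature batches extracted from the two modalities, the toy `mmd_linear` function below (a hypothetical stand-in, not the authors' loss) measures a linear-kernel maximum mean discrepancy; driving it toward zero during training would pull the two modalities' feature distributions together:

```python
import numpy as np

def mmd_linear(f_a: np.ndarray, f_b: np.ndarray) -> float:
    """Linear-kernel maximum mean discrepancy between two feature batches.

    Illustrative stand-in for a cross-modality alignment loss:
    minimizing it pushes the mean optical and SAR embeddings together.
    Inputs are (batch, feature_dim) arrays.
    """
    delta = f_a.mean(axis=0) - f_b.mean(axis=0)  # gap between batch means
    return float(delta @ delta)                  # squared Euclidean norm

rng = np.random.default_rng(0)
f_optical = rng.normal(0.0, 1.0, size=(64, 16))  # hypothetical optical features
f_sar = rng.normal(0.5, 1.0, size=(64, 16))      # hypothetical SAR features (shifted)

# A misaligned pair has a larger discrepancy than a perfectly aligned one (0).
print(mmd_linear(f_optical, f_sar) > mmd_linear(f_optical, f_optical))  # prints True
```

In practice such a term would be one of several losses (here, alongside despeckling and self-training objectives) backpropagated through a shared feature extractor rather than computed on raw arrays.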

Keywords