Remote Sensing (Nov 2023)

Adversarial Attacks in Underwater Acoustic Target Recognition with Deep Learning Models

  • Sheng Feng,
  • Xiaoqian Zhu,
  • Shuqing Ma,
  • Qiang Lan

DOI
https://doi.org/10.3390/rs15225386
Journal volume & issue
Vol. 15, no. 22
p. 5386

Abstract

Deep learning models can be destabilized by imperceptible perturbations that are difficult for humans to recognize. Because the interpretability of these models remains poorly understood, such perturbations can significantly compromise the accuracy and security of deep learning applications. This problem clearly exists in underwater acoustic target recognition for ocean sensing, a field critical to security research. To address this issue, this article investigates the reliability of state-of-the-art deep learning models by exploring adversarial attack methods that add small, carefully crafted perturbations to acoustic Mel-spectrograms in order to generate adversarial spectrograms. Experimental results on real-world datasets reveal that these models can be forced to learn unexpected features when subjected to adversarial spectrograms, resulting in significant accuracy drops. Specifically, under the iterative attack method with stronger perturbations, the overall accuracy of all models decreases by approximately 70% on both datasets.
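
The abstract does not specify the exact attack formulation, but iterative spectrogram attacks of this kind are commonly implemented as projected gradient descent (PGD). The sketch below illustrates a PGD-style L∞ attack on a batch of Mel-spectrograms under that assumption; the function name pgd_attack and the parameters eps, alpha, and steps are illustrative choices, not the paper's published code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, spec, label, eps=0.05, alpha=0.01, steps=10):
    """Iterative (PGD-style) L-infinity attack on Mel-spectrograms.

    model: classifier mapping (B, 1, n_mels, T) spectrograms to logits
    spec:  clean input spectrograms, shape (B, 1, n_mels, T)
    label: ground-truth class indices, shape (B,)
    eps:   perturbation budget (max absolute change per bin)
    alpha: per-iteration step size
    steps: number of attack iterations
    """
    adv = spec.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        # Loss of the current adversarial spectrogram w.r.t. the true label.
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Take a signed gradient-ascent step to increase the loss...
        adv = adv + alpha * grad.sign()
        # ...then project back into the eps-ball around the clean input.
        adv = spec + torch.clamp(adv - spec, -eps, eps)
    return adv.detach()
```

Larger values of eps correspond to the "stronger perturbations" regime the abstract reports, at the cost of perturbations that become easier to detect.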

Keywords