Applied Sciences (Sep 2022)

Evading Logits-Based Detections to Audio Adversarial Examples by Logits-Traction Attack

  • Songshen Han,
  • Kaiyong Xu,
  • Songhui Guo,
  • Miao Yu,
  • Bo Yang

DOI
https://doi.org/10.3390/app12189388
Journal volume & issue
Vol. 12, no. 18
p. 9388

Abstract


Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are crafted by deliberately adding perturbations to the original audio. Thorough study of the universal features of adversarial examples is essential to prevent potential attacks. Previous research has shown that classic adversarial examples exhibit a logits distribution different from that of normal speech. This paper proposes a Logits-Traction attack to eliminate this difference at the statistical level. Experiments on the LibriSpeech dataset show that the proposed attack reduces the accuracy of the LOGITS NOISE detection to 52.1%. To further verify the effectiveness of this approach against detection based on logits, three different features quantifying the dispersion of logits are constructed in this paper, and a richer set of target sentences is adopted for the experiments. The results indicate that these features detect baseline adversarial examples with an accuracy of about 90% but cannot effectively detect Logits-Traction adversarial examples, proving that the Logits-Traction attack can evade logits-based detection methods.
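The abstract does not specify which three dispersion features the paper constructs, but the general idea of quantifying how "spread out" per-frame logits are can be sketched with common statistics. The following is a minimal illustration, assuming hypothetical features (per-frame logit variance, softmax entropy, and top-1-minus-mean gap) that are not taken from the paper itself:

```python
import numpy as np

def logits_dispersion_features(logits):
    """Illustrative dispersion statistics over per-frame logits.

    logits: array of shape (frames, classes).
    The three features below (variance, entropy, top-gap) are
    hypothetical stand-ins for the paper's unspecified features.
    """
    # Numerically stable softmax per frame
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    variance = logits.var(axis=1).mean()                      # spread of raw logits
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()  # diffuseness
    top_gap = (logits.max(axis=1) - logits.mean(axis=1)).mean()    # dominance of top class
    return np.array([variance, entropy, top_gap])

# Sharp (one dominant class) vs. flat (diffuse) synthetic logits
rng = np.random.default_rng(0)
sharp = rng.normal(0.0, 1.0, (50, 29))
sharp[:, 0] += 10.0
flat = rng.normal(0.0, 1.0, (50, 29))
print(logits_dispersion_features(sharp))
print(logits_dispersion_features(flat))
```

A detector in this spirit would threshold such statistics, since confident (sharp) logits yield high variance, low entropy, and a large top gap, while diffuse logits yield the opposite; a Logits-Traction-style attack would aim to make the adversarial example's statistics match those of normal speech.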

Keywords