IEEE Access (Jan 2023)

StAIn: Stealthy Avenues of Attacks on Horizontally Collaborated Convolutional Neural Network Inference and Their Mitigation

  • Adewale A. Adeyemo,
  • Jonathan J. Sanderson,
  • Tolulope A. Odetola,
  • Faiq Khalid,
  • Syed Rafay Hasan

DOI
https://doi.org/10.1109/ACCESS.2023.3241096
Journal volume & issue
Vol. 11
pp. 10520–10534

Abstract

With significant potential improvements in device-to-device (D2D) communication due to increased wireless link capacity (e.g., 5G and NextG systems), collaboration among multiple edge devices, called horizontal collaboration (HC), is becoming a reality for real-time Edge Intelligence (EI). The distributed nature of HC offers an advantage against traditional adversarial attacks because the adversary does not have access to the entire deep learning architecture (DLA). However, because an HC environment involves multiple untrusted edge devices, the possibility of malicious devices cannot be eliminated. In this paper, we unearth attacks that remain highly effective and stealthy even when the attacker has minimal knowledge of the DLA, as is the case in HC-based DLAs, and we provide novel filtering methods to mitigate such attacks. Our attacks leverage local information available in the output feature maps (FMs) of a targeted edge device to modify standard adversarial attacks (e.g., the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA)). As a countermeasure, a customized convolutional neural network (CNN)-based filter is empirically designed, developed, and tested. Four CNN models (LeNet, CapsuleNet, MiniVGGNet, and VGG16) are used to validate the proposed attack and defense methodologies. Our three attacks (with two variations of each) on the four CNN models cause a substantial accuracy drop of 62% on average, and the proposed filtering approach mitigates the attacks by recovering accuracy to 75.1% on average. To the best of our knowledge, this is the first work that investigates the security vulnerabilities of DLAs in the HC environment, and all three of our attacks are scalable and agnostic to the partition location within the DLA.
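To make the threat model concrete, the sketch below (in PyTorch) shows a sign-based perturbation applied to the output FMs of a compromised edge device before they cross the D2D link. This is an illustration under our own assumptions, not the authors' exact attack; the function name local_fm_attack and the epsilon parameter are hypothetical, introduced only to show where such a perturbation sits in an HC pipeline.

    # A minimal sketch, not the paper's exact attack: a sign-based
    # perturbation on the output feature maps (FMs) of a compromised
    # edge device. `local_fm_attack` and `epsilon` are hypothetical
    # names introduced for illustration only.
    import torch

    def local_fm_attack(fm: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
        """Perturb the intermediate FMs before forwarding them onward.

        `fm` is the FM tensor a benign device would transmit to the
        next device in the HC pipeline; `epsilon` trades attack
        strength against stealthiness.
        """
        # FGSM-style sign step, but driven by the locally available FM
        # values rather than an end-to-end loss gradient, since in the
        # HC setting the attacker does not see the full DLA.
        return fm + epsilon * fm.sign()

A malicious device hosting one partition could apply such a function to its layer output before transmission, leaving every other device in the pipeline untouched, which is what makes FM-level tampering hard to detect downstream.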

Keywords