IEEE Access (Jan 2024)

Transformers: A Security Perspective

  • Banafsheh Saber Latibari,
  • Najmeh Nazari,
  • Muhtasim Alam Chowdhury,
  • Kevin Immanuel Gubbi,
  • Chongzhou Fang,
  • Sujan Ghimire,
  • Elahe Hosseini,
  • Hossein Sayadi,
  • Houman Homayoun,
  • Soheil Salehi,
  • Avesta Sasan

DOI
https://doi.org/10.1109/ACCESS.2024.3509372
Journal volume & issue
Vol. 12
pp. 181071–181105

Abstract


The Transformer architecture has recently emerged as a revolutionary paradigm in deep learning, excelling particularly in Natural Language Processing (NLP) and Computer Vision (CV) applications. Despite this success, the security implications of Transformers, which span a broad spectrum of both hardware and software vulnerabilities, have not been comprehensively explored. This paper addresses that critical gap through an extensive exploration of the security challenges confronting Transformers from both software and hardware perspectives. While software-related concerns such as adversarial attacks, private inference, and watermarking have been studied, the paper also sheds light on previously underexplored hardware vulnerabilities such as trojans and side-channel attacks. By unraveling the intricacies of these hardware threats, the study contributes to a comprehensive understanding of Transformer security. It presents an in-depth analysis of recent advancements in the security of Transformers, outlines existing challenges, and forecasts future research trends, offering insights for researchers and practitioners aiming for the secure and resilient design and deployment of Transformers. The survey categorizes attacks and defenses related to Transformers, helping researchers identify gaps and opportunities in this area. Furthermore, it defines a roadmap for a unified security framework, serving as a foundational starting point for developers seeking to implement robust security measures.

Keywords