IEEE Access (Jan 2024)

Adversarial Attacks Against Binary Similarity Systems

  • Gianluca Capozzi
  • Daniele Cono D'Elia
  • Giuseppe Antonio Di Luna
  • Leonardo Querzoni

DOI: https://doi.org/10.1109/ACCESS.2024.3488204
Journal volume & issue: Vol. 12, pp. 161247–161269

Abstract

Binary analysis has become essential for software inspection and security assessment. As the number of software-driven devices grows, research is shifting towards autonomous solutions based on deep learning models. In this context, an active research topic is the binary similarity problem, which involves determining whether two assembly functions originate from the same source code. However, it is unclear how deep learning models for binary similarity behave in an adversarial context. In this paper, we study the resilience of binary similarity models against adversarial examples, showing that they are susceptible to both targeted and untargeted (with respect to similarity goals) attacks performed by black-box and white-box attackers. We extensively test three state-of-the-art binary similarity solutions against (i) a black-box greedy attack that we enrich with a new search heuristic, termed Spatial Greedy, and (ii) a white-box attack in which we repurpose a gradient-guided strategy used in attacks against image classifiers. Interestingly, the target models are more susceptible to black-box attacks than to white-box ones, and they exhibit greater resilience against targeted attacks than against untargeted ones.
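To make the black-box setting concrete, the following minimal sketch shows a greedy attack loop of the general kind the abstract describes: it repeatedly queries a similarity model while inserting semantics-preserving dead code into the target function, keeping any insertion that lowers the similarity score (an untargeted attack). This is an illustrative assumption rather than the paper's actual method: the `similarity` interface, the `DEAD_CODE` pool, and the insertion-based perturbation are all hypothetical, and the uniform candidate sampling is exactly the step a search heuristic such as the authors' Spatial Greedy would replace.

```python
# Illustrative black-box greedy attack on a binary similarity model.
# NOT the paper's implementation: the model interface, the dead-code pool,
# and the perturbation scheme are assumptions made for illustration.
import random
from typing import Callable, List

# Hypothetical pool of semantics-preserving filler instructions (dead code).
DEAD_CODE = ["nop", "mov eax, eax", "xchg ebx, ebx", "lea esi, [esi]"]

def greedy_untargeted_attack(
    target: List[str],       # assembly of the function being perturbed
    reference: List[str],    # function the model currently deems similar
    similarity: Callable[[List[str], List[str]], float],  # black-box model
    max_iters: int = 50,
    candidates_per_iter: int = 30,
) -> List[str]:
    """Greedily insert dead code to lower similarity(target, reference)."""
    best = list(target)
    best_score = similarity(best, reference)
    for _ in range(max_iters):
        improved = False
        # Sample candidate single-instruction insertions uniformly at random;
        # a smarter search heuristic would bias this candidate selection.
        for _ in range(candidates_per_iter):
            pos = random.randrange(len(best) + 1)
            ins = random.choice(DEAD_CODE)
            cand = best[:pos] + [ins] + best[pos:]
            score = similarity(cand, reference)  # one black-box query
            if score < best_score:
                best, best_score, improved = cand, score, True
        if not improved:
            break  # local optimum: no sampled insertion lowers the score
    return best
```

A targeted variant would instead maximize similarity to a chosen decoy function, which, consistent with the abstract's findings, is typically the harder objective for the attacker.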

Keywords