Memories - Materials, Devices, Circuits and Systems (Jul 2023)
Exploiting device-level non-idealities for adversarial attacks on ReRAM-based neural networks
Abstract
Resistive memory (ReRAM), or memristor, devices offer the prospect of more efficient computing. While memristors have been applied to a variety of computing systems, they have gained particular traction in deep learning: the weight matrices of deep neural networks can be mapped onto crossbar arrays of memristive junctions, generally yielding superior performance and energy efficiency. However, because ReRAM technology is still nascent, currently available devices exhibit inherent non-idealities. Deep neural networks have already been shown to be susceptible to adversarial attacks, often through vulnerabilities in the networks’ internal representation of input data. In this paper, we explore the causal relationship between device-level non-idealities in ReRAM devices and the classification performance of memristor-based neural network accelerators. Specifically, we aim to generate images that bypass adversarial defense mechanisms in software neural networks yet trigger non-trivial performance degradation in ReRAM-based neural networks. To this end, we propose a framework for generating adversarial images in the hypervolume between the two decision boundaries, thereby exploiting non-ideal device behavior to degrade performance. We employ state-of-the-art tools from explainable artificial intelligence to characterize our adversarial image samples, and derive a new metric that quantifies susceptibility to adversarial attacks at the pixel and device levels.
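To make the crossbar mapping concrete, the following is a minimal sketch (not the paper's actual simulation framework) of how a weight matrix can be linearly mapped to device conductances so that, by Kirchhoff's current law, the column currents realize a matrix-vector product in one analog step. The conductance range, the linear mapping, and the lognormal programming-noise term are illustrative assumptions standing in for the device-level non-idealities the paper studies.

```python
import numpy as np

def crossbar_matvec(weights, x, g_min=1e-6, g_max=1e-4, sigma=0.0, rng=None):
    """Emulate an analog crossbar matrix-vector product.

    Each weight is linearly mapped to a conductance in [g_min, g_max];
    the column currents I = x @ G then encode the weighted sum. `sigma`
    injects multiplicative lognormal noise as a simple stand-in for
    device programming variability (an assumed, illustrative model).
    """
    rng = rng or np.random.default_rng(0)
    w_min, w_max = weights.min(), weights.max()
    w_span, g_span = (w_max - w_min) + 1e-12, g_max - g_min
    # Linear weight-to-conductance mapping (a common single-device scheme).
    g = g_min + (weights - w_min) / w_span * g_span
    if sigma > 0:
        g = g * rng.lognormal(mean=0.0, sigma=sigma, size=g.shape)
    currents = x @ g  # analog MAC: input voltages times conductances
    # Invert the linear mapping to recover the effective (noisy) result.
    return (currents - x.sum() * g_min) / g_span * w_span + x.sum() * w_min

W = np.array([[0.2, -0.5], [0.8, 0.1], [-0.3, 0.4]])
x = np.array([1.0, 0.5, -0.25])
ideal = x @ W                              # software result
noisy = crossbar_matvec(W, x, sigma=0.1)   # result with device noise
```

With `sigma=0` the mapping is exactly invertible and the analog result matches the software product; with `sigma>0` the two diverge, which is precisely the gap between the software and hardware decision boundaries that the attack exploits.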