Scientific Reports (Oct 2024)

A flexible and fast digital twin for RRAM systems applied for training resilient neural networks

  • Markus Fritscher,
  • Simranjeet Singh,
  • Tommaso Rizzi,
  • Andrea Baroni,
  • Daniel Reiser,
  • Maen Mallah,
  • David Hartmann,
  • Ankit Bende,
  • Tim Kempen,
  • Max Uhlmann,
  • Gerhard Kahmen,
  • Dietmar Fey,
  • Vikas Rana,
  • Stephan Menzel,
  • Marc Reichenbach,
  • Milos Krstic,
  • Farhad Merchant,
  • Christian Wenger

DOI
https://doi.org/10.1038/s41598-024-73439-z
Journal volume & issue
Vol. 14, no. 1
pp. 1–13

Abstract


Resistive Random Access Memory (RRAM) has gained considerable momentum due to its non-volatility and energy efficiency. Material and device scientists have been proposing novel material stacks that mimic the “ideal memristor”, delivering performance, energy efficiency, reliability, and accuracy. However, designing RRAM-based systems is challenging. Engineering a new material stack, designing a device, and experimenting take significant time for material and device researchers. Furthermore, the acceptability of a device is ultimately decided at the system level. We see a gap here: material and device researchers need a “push button” modeling framework that allows them to evaluate the efficacy of a device at the system level during early design stages. Speed, accuracy, and adaptability are the fundamental requirements of this modeling framework. In this paper, we propose a digital twin (DT)-like modeling framework that automatically creates RRAM device models from device measurement data. Furthermore, the model incorporates the peripheral circuit to ensure accurate energy and performance evaluations. We demonstrate the DT generation and DT usage for multiple RRAM technologies and applications and illustrate the achieved performance of our GPU implementation. We conclude by applying our modeling approach to measurement data from two distinct fabricated devices, validating its effectiveness in a neural network processing an Electrocardiogram (ECG) dataset and incorporating Fault Aware Training (FAT).
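For readers unfamiliar with Fault Aware Training, the sketch below illustrates the general idea in PyTorch: device non-idealities are injected into the weights during the forward pass so the trained network becomes resilient to them. This is a generic illustration, not the authors' digital-twin framework; the Gaussian-variation and stuck-at fault model, the parameter values, and the layer sizes (180 ECG features, 5 classes) are assumptions for demonstration only.

```python
# Minimal sketch of Fault Aware Training (FAT), assuming a generic
# RRAM variability model: multiplicative Gaussian conductance noise
# plus a small fraction of stuck-at-zero cells. Not the paper's model.
import torch
import torch.nn as nn


def inject_rram_faults(weight, sigma=0.05, stuck_prob=0.01):
    """Perturb weights as an RRAM crossbar might.

    sigma and stuck_prob are illustrative values, not measured data.
    """
    noisy = weight * (1.0 + sigma * torch.randn_like(weight))
    stuck = torch.rand_like(weight) < stuck_prob
    return torch.where(stuck, torch.zeros_like(weight), noisy)


class FaultAwareLinear(nn.Linear):
    """Linear layer that applies the fault model only during training,
    so gradients still flow back to the clean stored weights."""

    def forward(self, x):
        w = inject_rram_faults(self.weight) if self.training else self.weight
        return nn.functional.linear(x, w, self.bias)


# Hypothetical usage: a small ECG classifier trained with fault injection.
model = nn.Sequential(
    FaultAwareLinear(180, 64),  # 180 = assumed ECG beat length
    nn.ReLU(),
    FaultAwareLinear(64, 5),    # 5 = assumed number of heartbeat classes
)
```

Because the fault injection is re-sampled on every forward pass, the optimizer sees a distribution of perturbed weights rather than a single realization, which is what gives FAT its robustness to device-to-device and cycle-to-cycle variation.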

Keywords