APL Machine Learning (Jun 2023)

Analysis of VMM computation strategies to implement BNN applications on RRAM arrays

  • Vivek Parmar,
  • Sandeep Kaur Kingra,
  • Shubham Negi,
  • Manan Suri

DOI
https://doi.org/10.1063/5.0139583
Journal volume & issue
Vol. 1, no. 2
pp. 026108 – 026108-11

Abstract

The growing interest in edge-AI solutions and advances in the field of quantized neural networks have led to hardware-efficient binary neural networks (BNNs). Extreme BNNs utilize only binary weights and activations, making them more memory efficient. Such networks can be realized using exclusive-NOR (XNOR) gates and popcount circuits. The analog in-memory realization of BNNs using emerging non-volatile memory devices has been widely explored recently. However, most realizations typically use 2T-2R synapses, resulting in sub-optimal area utilization. In this study, we investigate alternative computation mapping strategies for realizing BNNs on selectorless resistive random access memory (RRAM) arrays. We propose a new differential computation scheme that performs comparably to the well-established XNOR computation strategy. Through extensive experimental characterization, we demonstrate a BNN implementation using a crossbar of non-filamentary bipolar oxide-based random access memory devices on two datasets: (i) experimental characterization on a thermal-image-based Rock-Paper-Scissors dataset to analyze the impact of sneak paths in real-hardware experiments and (ii) large-scale BNN simulations on the Fashion-MNIST dataset using multi-level cell characteristics of the non-filamentary devices to demonstrate the impact of device non-idealities.
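
To make the two computation strategies concrete, below is a minimal NumPy sketch (not taken from the paper) contrasting XNOR-popcount evaluation of a binary dot product with a simple differential-readout mapping on a crossbar. The conductance values G_on/G_off, the one-device-per-weight mapping, and the current-differencing readout are illustrative assumptions; the paper's actual differential scheme, device model, and sneak-path behavior may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # illustrative vector length

# Bipolar binary weights and activations in {-1, +1}
w = rng.choice([-1, 1], size=N)
x = rng.choice([-1, 1], size=N)
dot_ref = int(np.dot(x, w))              # ideal VMM result for one output

# --- XNOR-popcount strategy ---
# Encode {-1, +1} as bits {0, 1}; then x.w = 2*popcount(XNOR(xb, wb)) - N.
xb = (x > 0).astype(np.uint8)
wb = (w > 0).astype(np.uint8)
xnor = 1 - (xb ^ wb)                     # XNOR = NOT(XOR)
dot_xnor = 2 * int(xnor.sum()) - N
assert dot_xnor == dot_ref

# --- Differential-readout sketch (assumed mapping) ---
# One device per weight: +1 -> G_on, -1 -> G_off (illustrative values).
G_on, G_off = 1.0e-4, 5.0e-6
G = np.where(w > 0, G_on, G_off)
V = 0.2                                  # read voltage
# Read rows with x = +1 and x = -1 separately, then difference the
# column currents; this equals V * sum(x_i * G_i) for bipolar inputs.
I_diff = V * G[x > 0].sum() - V * G[x < 0].sum()
# Invert the linear weight-to-conductance map to recover the dot product:
# G_i = G_off + (dG/2)*(w_i + 1), with dG = G_on - G_off, so
# sum(x_i*G_i) = (dG/2)*(x.w) + (G_off + dG/2)*sum(x_i).
dG = G_on - G_off
sum_x = int(x.sum())                     # known from the applied inputs
dot_est = (I_diff / V - (G_off + dG / 2) * sum_x) / (dG / 2)
assert round(dot_est) == dot_ref
```

In a real selectorless array, sneak-path currents and device non-idealities perturb the measured column currents, which is precisely what the paper's hardware experiments and large-scale simulations set out to quantify.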