IEEE Access (Jan 2024)

TGBNN: Training Algorithm of Binarized Neural Network With Ternary Gradients for MRAM-Based Computing-in-Memory Architecture

  • Yuya Fujiwara,
  • Takayuki Kawahara

DOI: https://doi.org/10.1109/ACCESS.2024.3476417
Journal volume & issue: Vol. 12, pp. 150962–150974

Abstract

To build Neural Networks (NNs) on edge devices, the Binarized Neural Network (BNN) has been proposed on the software side, while the Computing-in-Memory (CiM) architecture has been proposed on the hardware side. For CiM-based BNNs, Magnetic Random Access Memory (MRAM) has attracted interest thanks to its low power consumption and fast write operation. In this study, we propose a new BNN training algorithm with ternarized gradients (TGBNN) for an MRAM-based CiM architecture, enabling both BNN training and inference on edge devices. TGBNN uses only ternary gradients, binary weights, binary activations, and binary inputs in both the training and inference phases. In other words, the real-valued weights and real-valued gradients required by conventional BNNs in the training phase never appear in our BNN. TGBNN relies on three key techniques: ternarized gradients, an improved straight-through estimator, and stochastic weight updates. In addition, to implement TGBNN on edge devices, we propose a new MRAM-based CiM architecture. Our MRAM array consists of MRAM cell-based XNOR gates utilizing Voltage-Controlled Magnetic Anisotropy (VCMA) and MRAM cell-based stochastic updating utilizing Spin-Orbit Torque (SOT). Owing to our MRAM-based CiM architecture, we can halve the scale of the Multiply-and-Accumulate (MAC) operation circuit compared with the conventional method. Lastly, we evaluated TGBNN on our MRAM architecture using the MNIST handwritten digit dataset. The results showed that the accuracy of TGBNN was only 0.92 % lower than that of a regular BNN with the same structure, and that the training of TGBNN converges faster than that of a regular BNN. In addition, we achieved 88.28 % accuracy with ECOC-based learning of TGBNN. Therefore, owing to TGBNN on our MRAM-based CiM architecture, a BNN can be built on the edge side in both the inference and training phases.
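The abstract names three core mechanisms but does not give the exact quantization rule, straight-through-estimator variant, or flip probabilities. The following is a minimal NumPy sketch of the three ideas as described above: an XNOR-style binary MAC, gradient ternarization to {-1, 0, +1}, and a stochastic sign-flip weight update. All function names (xnor_mac, ternarize, stochastic_update) and the threshold and probability values are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def xnor_mac(x_bin, w_bin):
        """Binary MAC. For x, w in {-1, +1}, elementwise product equals
        XNOR of their bit encodings, so the sum equals
        2 * popcount(xnor(x, w)) - n in hardware."""
        return np.sum(x_bin * w_bin, axis=-1)

    def ternarize(grad, threshold=0.5):
        """Quantize real-valued gradients to {-1, 0, +1}.
        Assumed rule: sign outside a dead zone; the paper's exact
        ternarization rule may differ."""
        t = np.zeros_like(grad)
        t[grad > threshold] = 1.0
        t[grad < -threshold] = -1.0
        return t

    def stochastic_update(w_bin, g_ter, flip_prob=0.1):
        """Flip a binary weight with probability flip_prob when the
        ternary gradient opposes its current sign (g * w > 0 means
        flipping w reduces the loss). Stand-in for the paper's
        SOT-based stochastic update; flip_prob is illustrative."""
        oppose = (g_ter * w_bin) > 0
        flips = oppose & (rng.random(w_bin.shape) < flip_prob)
        w_new = w_bin.copy()
        w_new[flips] *= -1
        return w_new

    # Toy usage: one binary neuron with 8 inputs.
    x = np.sign(rng.standard_normal(8))          # binary input in {-1, +1}
    w = np.sign(rng.standard_normal(8))          # binary weights in {-1, +1}
    y = xnor_mac(x, w)                           # integer pre-activation
    g = ternarize(rng.standard_normal(8))        # stand-in ternary gradient
    w = stochastic_update(w, g)                  # probabilistic weight flip

In the paper's hardware, the flip probability would be realized physically by stochastic SOT switching of the MRAM cell rather than by a software random draw as sketched here.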

Keywords