APL Materials (Aug 2019)

Hardware implementation of RRAM based binarized neural networks

  • Peng Huang,
  • Zheng Zhou,
  • Yizhou Zhang,
  • Yachen Xiang,
  • Runze Han,
  • Lifeng Liu,
  • Xiaoyan Liu,
  • Jinfeng Kang

DOI
https://doi.org/10.1063/1.5116863
Journal volume & issue
Vol. 7, no. 8
pp. 081105 – 081105-6

Abstract


Resistive switching random access memory (RRAM) has been explored to accelerate the computation of neural networks. RRAM with linear conductance modulation is usually required for efficient weight updating during online training based on the back-propagation algorithm. However, most RRAM devices exhibit nonlinear conductance modulation. Here, to overcome this dilemma, we design a novel weight updating principle for binarized neural networks, which enables nonlinear RRAM to perform weight updates efficiently during online training. Moreover, a vector-matrix multiplication scheme is designed to compute the dot-products of the forward and backward propagation in parallel. A 1 kb nonlinear RRAM array is fabricated to demonstrate the feasibility of the analog accumulation and the parallel vector-matrix multiplication. The results achieved in this work offer new solutions for future energy-efficient neural networks.
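To make the abstract's scheme concrete, the sketch below illustrates the general binarized-weight training loop (BinaryConnect-style) that such hardware targets: weights are stored as ±1 states, forward and backward propagation are both parallel vector-matrix multiplications, and an update only changes a stored binary state when the sign of a latent analog weight flips. This is a minimal software analogy under assumed shapes and names (W_real, binarize, etc.), not the paper's RRAM-specific circuit or update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W_real = rng.uniform(-1, 1, size=(n_in, n_out))  # latent analog weights

def binarize(W):
    """Map analog weights to +1/-1, as would be stored in RRAM cells."""
    return np.where(W >= 0, 1.0, -1.0)

# Forward pass: one vector-matrix multiplication computes all
# dot-products in parallel, analogous to summing read currents
# along the bit lines of an RRAM array.
x = rng.uniform(0, 1, size=n_in)
W_b = binarize(W_real)
y = x @ W_b

# Backward pass: the error propagates through the same binary matrix
# (transposed), again as a single parallel vector-matrix multiplication.
err = rng.uniform(-0.1, 0.1, size=n_out)  # stand-in for dL/dy
grad_W = np.outer(x, err)                 # dL/dW for y = x @ W_b

# Update the latent analog weights; the stored binary state changes
# only when a latent weight crosses zero, so each cell needs only a
# SET/RESET switch rather than linear conductance tuning.
lr = 0.1
W_real = np.clip(W_real - lr * grad_W, -1, 1)
print(binarize(W_real))
```

The key design point this mirrors is that sign-flip updates tolerate nonlinear devices: the cell is programmed between two states rather than nudged along an analog conductance curve.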