Advanced Intelligent Systems (Jul 2020)

Parallel Operation of Self‐Limited Analog Programming for Fast Array‐Level Weight Programming and Update

  • Hanchan Song,
  • Jangho An,
  • Seoil Son,
  • Young Seok Kim,
  • Juseong Park,
  • Jae Bum Jeon,
  • Geunyoung Kim,
  • Kyung Min Kim

DOI
https://doi.org/10.1002/aisy.202000014
Journal volume & issue
Vol. 2, no. 7

Abstract


Memristive neural networks perform vector-matrix multiplication efficiently, which makes them attractive as accelerators for neuromorphic computing. To train the cells of a memristive neural network, the analog conductance states of the memristors should be programmed in parallel; otherwise, the resulting long training time can limit the size of the neural network. Herein, a novel parallel programming method that exploits the self-limited analog switching behavior of the memristor is proposed. A Pt/Ti:NbOx/NbOx/TiN charge-trap memristor device is used to demonstrate the programming, and a convolutional neural network trained on the MNIST dataset is emulated based on the device characteristics. In the simulation, the proposed programming method reduces the programming time to as little as 1/130 of that of the sequential programming method. The simulation also suggests that the programming time required by the proposed method is unaffected by array size, which makes the method very promising for high-density neural networks.
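The two ideas in the abstract can be sketched in a few lines: a crossbar of conductances computes a vector-matrix product in a single step (Ohm's and Kirchhoff's laws), and self-limited switching lets one shared pulse sequence update every cell at once, since each cell's conductance change saturates at its own target. This is an illustrative sketch only, not the authors' device model; the saturation factor, conductance ranges, and pulse count are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crossbar read: output currents I = V @ G, computed in one step by the
# physics of the array. G holds the cell conductances (siemens).
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4x3 memristor array
V = rng.uniform(0.0, 0.2, size=4)          # input voltages (volts)
I = V @ G                                   # all column currents at once

# Hypothetical self-limited parallel programming: the same pulse train is
# applied to the whole array, and each cell's response shrinks as it
# approaches its own target state, so all cells converge together
# regardless of array size (no cell-by-cell sequential programming).
G_target = rng.uniform(1e-6, 1e-4, size=G.shape)
for _ in range(50):                         # shared programming pulses
    step = 0.2 * (G_target - G)             # self-limiting: change decays
    G = G + step                            # as each cell nears its target
# After the pulse train, G is close to G_target for every cell.
```

Because the loop count does not grow with the number of cells, the programming time in this toy model is independent of array size, which mirrors the abstract's claim about the proposed method.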

Keywords