IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (Jan 2020)

A Relaxed Quantization Training Method for Hardware Limitations of Resistive Random Access Memory (ReRAM)-Based Computing-in-Memory

  • Wei-Chen Wei,
  • Chuan-Jia Jhang,
  • Yi-Ren Chen,
  • Cheng-Xin Xue,
  • Syuan-Hao Sie,
  • Jye-Luen Lee,
  • Hao-Wen Kuo,
  • Chih-Cheng Lu,
  • Meng-Fan Chang,
  • Kea-Tiong Tang

DOI
https://doi.org/10.1109/JXCDC.2020.2992306
Journal volume & issue
Vol. 6, no. 1
pp. 45 – 52

Abstract

Nonvolatile computing-in-memory (nvCIM) exhibits high potential for neuromorphic computing, which involves massively parallel computation, and for achieving high energy efficiency. nvCIM is especially suitable for deep neural networks, which must perform large numbers of matrix-vector multiplications. However, a comprehensive quantization algorithm that overcomes the hardware limitations of resistive random access memory (ReRAM)-based nvCIM, such as the limited numbers of I/Os, word lines (WLs), and analog-to-digital converter (ADC) outputs, has yet to be developed. In this article, we propose a quantization training method for compressing deep models. The method comprises three steps: input and weight quantization, ReRAM convolution (ReConv), and ADC quantization. The ADC quantization step addresses the error sampling problem by using the Gumbel-softmax trick. With a 4-bit ADC in the nvCIM macro, accuracy decreases by only 0.05% and 1.31% on MNIST and CIFAR-10, respectively, compared with the accuracies obtained with an ideal ADC. The experimental results indicate that the proposed method effectively compensates for the hardware limitations of nvCIM macros.
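The abstract names the Gumbel-softmax trick for the ADC quantization step but does not give details. The sketch below is a minimal, hypothetical illustration (not the authors' code) of how an analog bit-line readout could be mapped to a small set of ADC levels while keeping the sampling step differentiable during training; the function name `gumbel_softmax_adc`, the distance-based logits, and the uniform level grid are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_adc(analog_out, levels, tau=1.0, hard=True):
    """Differentiable ADC quantization sketch using the Gumbel-softmax trick.

    analog_out : tensor of simulated bit-line outputs (any shape).
    levels     : 1-D tensor of the 2**b ADC output levels (16 for a 4-bit ADC).
    Levels closer to the analog value receive larger logits, so they are
    sampled with higher probability, yet gradients still reach analog_out.
    """
    # Squared distance of every analog value to every quantization level.
    dist = (analog_out.unsqueeze(-1) - levels) ** 2        # (..., n_levels)
    logits = -dist                                         # closer level -> larger logit

    # Gumbel-softmax sampling: stochastic forward pass, differentiable backward pass.
    y = F.gumbel_softmax(logits, tau=tau, hard=hard)       # (one-hot) level weights
    return (y * levels).sum(dim=-1)                        # quantized output

# Example: 4-bit ADC (16 levels) applied to a batch of simulated bit-line outputs.
levels = torch.linspace(0.0, 1.0, 16)
analog = torch.rand(8, 32, requires_grad=True)
quantized = gumbel_softmax_adc(analog, levels, tau=0.5)
quantized.sum().backward()  # gradients flow back to `analog` despite quantization
```

With `hard=True`, PyTorch's `gumbel_softmax` returns one-hot samples in the forward pass and uses a straight-through estimator in the backward pass, which is one common way to keep a discrete quantizer trainable; the paper's actual formulation may differ.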

Keywords