Advanced Intelligent Systems (Dec 2022)

Floating Gate Transistor‐Based Accurate Digital In‐Memory Computing for Deep Neural Networks

  • Runze Han,
  • Peng Huang,
  • Yachen Xiang,
  • Hong Hu,
  • Sheng Lin,
  • Peiyan Dong,
  • Wensheng Shen,
  • Yanzhi Wang,
  • Xiaoyan Liu,
  • Jinfeng Kang

DOI
https://doi.org/10.1002/aisy.202200127
Journal volume & issue
Vol. 4, no. 12
pp. n/a – n/a

Abstract

To improve the computing speed and energy efficiency of deep neural network (DNN) applications, in‐memory computing with nonvolatile memory (NVM) has been proposed to address the time‐consuming and energy‐hungry data‐shuttling problem. Herein, a digital in‐memory computing method for convolution, the core operation of DNNs, is proposed. Based on the proposed method, a floating gate transistor‐based in‐memory computing chip for accurate convolution with high parallelism is fabricated. Unlike analogue or digital–analogue‐mixed in‐memory computing techniques, the proposed digital in‐memory computing method achieves central processing unit (CPU)‐equivalent precision with the same neural network architecture and parameters. A hardware LeNet‐5 neural network is built on the fabricated floating gate transistor‐based in‐memory computing chip. The chip achieves 96.25% accuracy on the full Modified National Institute of Standards and Technology (MNIST) database, identical to the result computed by a CPU with the same neural network architecture and parameters.
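The claim of CPU‐equivalent precision rests on the convolution being computed digitally: with integer (quantized) weights and activations, the multiply–accumulate result is deterministic, so an in‐memory digital engine can match a CPU bit for bit, whereas analogue accumulation introduces device noise. A minimal sketch of the reference operation (not the paper's circuit‐level implementation; the function name and valid‐padding, stride‐1 choices are illustrative assumptions):

```python
def conv2d(image, kernel):
    """Exact integer 2D convolution (valid padding, stride 1).

    Illustrative reference only -- this is the arithmetic a digital
    in-memory computing engine must reproduce bit-exactly, not the
    chip's actual dataflow.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0  # exact integer accumulation: no analogue noise term
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out
```

Because every term in the accumulation is an exact integer product, any two correct implementations (CPU or in‐memory) must agree on every output pixel, which is why the hardware LeNet‐5 can reach exactly the CPU's 96.25% accuracy rather than an approximation of it.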

Keywords