IEEE Access (Jan 2021)

Hardware-Based Spiking Neural Network Using a TFT-Type AND Flash Memory Array Architecture Based on Direct Feedback Alignment

  • Won-Mook Kang,
  • Dongseok Kwon,
  • Sung Yun Woo,
  • Soochang Lee,
  • Honam Yoo,
  • Jangsaeng Kim,
  • Byung-Gook Park,
  • Jong-Ho Lee

DOI
https://doi.org/10.1109/ACCESS.2021.3080310
Journal volume & issue
Vol. 9
pp. 73121–73132

Abstract

A hardware-based neural network that enables on-chip training using a thin-film transistor-type AND flash memory array architecture is designed. The synaptic device constituting the array is characterized by a doped $p$-type body, a gate insulator stack composed of SiO$_2$/Si$_3$N$_4$/Al$_2$O$_3$, and a partially curved poly-Si channel. The body reduces the circuit burden on the high-voltage driver required for both the source and drain lines when changing the synaptic weights. The high-$\kappa$ material in the gate insulator stack helps to lower the operating voltage of the device. As the device scales down, its structural characteristics have the potential to increase the efficiency of the memory operation and the immunity to the voltage-drop effect that occurs in the bit lines of the array. In an AND array architecture built from the fabricated synaptic devices, a pulse scheme for selective memory operation is proposed and verified experimentally. Because the direct feedback alignment (DFA) algorithm does not require the forward and backward paths to share the same synaptic weights, the AND array architecture can be used to design an efficient on-chip training neural network. Pulse schemes suited to the proposed AND array architecture are also devised to implement the DFA algorithm. In a system-level simulation, a recognition accuracy of up to 97.01% is obtained on the MNIST pattern learning task using the proposed pulse scheme and computing architecture.
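To make the DFA idea referenced in the abstract concrete, the following is a minimal software sketch, assuming a NumPy environment, sigmoid activations, and illustrative layer sizes; it is not the authors' pulse-scheme or array-level implementation. The essential point matches the abstract: the backward path projects the output error through a fixed random matrix B rather than through the transposed forward weights, so the forward and backward paths need not share synaptic weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 200, 10           # MNIST-sized layers (assumption)

W1 = rng.normal(0.0, 0.05, (n_in, n_hid))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.05, (n_hid, n_out))  # forward weights, layer 2
B = rng.normal(0.0, 0.05, (n_out, n_hid))   # fixed random feedback weights


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def dfa_step(W1, W2, x, target, lr=0.1):
    """One DFA update on a batch x of shape [batch, n_in]."""
    # Forward path
    h = sigmoid(x @ W1)                     # hidden activations
    y = sigmoid(h @ W2)                     # output activations
    e = y - target                          # output error

    # Backward path: the error reaches the hidden layer through the
    # fixed random matrix B, not through W2.T as in backpropagation.
    dh = (e @ B) * h * (1.0 - h)

    # Weight updates from local outer products (array-friendly form)
    W2 = W2 - lr * h.T @ e
    W1 = W1 - lr * x.T @ dh
    return W1, W2, float(np.mean(e ** 2))


# Toy usage with random data standing in for MNIST
x = rng.random((32, n_in))
t = np.eye(n_out)[rng.integers(0, n_out, 32)]
W1, W2, loss = dfa_step(W1, W2, x, t)
print(f"batch MSE after one DFA step: {loss:.4f}")
```

Because each layer's update depends only on its local activations and an error signal delivered by a fixed matrix, DFA maps more naturally onto a crossbar-style array than backpropagation, which would require reading the forward weights in transposed form.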

Keywords