IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (Jan 2019)
Benchmark of Ferroelectric Transistor-Based Hybrid Precision Synapse for Neural Network Accelerator
Abstract
In-memory computing with analog nonvolatile memories can accelerate the in situ training of deep neural networks. Recently, we proposed a synaptic cell consisting of one ferroelectric transistor (FeFET) and two CMOS transistors (2T1F) that exploits hybrid precision for training and inference, overcoming the challenges of nonlinear and asymmetric weight update and achieving nearly software-comparable training accuracy at the algorithm level. In this paper, we further present circuit-level benchmark results for this hybrid precision synapse in terms of area, latency, and energy. The corresponding array architecture is presented, and the array-level operations are illustrated. The benchmark is conducted with the multilayer perceptron (MLP) + NeuroSim framework and compared against other capacitor-assisted hybrid precision cells (e.g., 3T1C + 2PCM). The design tradeoffs and scalability of the different implementations are discussed.
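To make the nonlinearity and asymmetry challenge mentioned above concrete, the sketch below models conductance tuning with identical programming pulses using a saturating-exponential device model of the kind commonly assumed in NeuroSim-style benchmarks; all parameter names and values here (`g_min`, `g_max`, `max_pulses`, `nonlinearity_a`) are illustrative assumptions, not figures from this paper.

```python
import numpy as np

def conductance_after_pulses(num_pulses, g_min=0.0, g_max=1.0,
                             max_pulses=64, nonlinearity_a=8.0,
                             potentiation=True):
    """Conductance reached after `num_pulses` identical programming pulses.

    Potentiation follows a saturating exponential from g_min toward g_max;
    depression mirrors it from g_max toward g_min. Because the two curves
    are not inverses of each other, the update is asymmetric: equal numbers
    of "up" and "down" pulses do not return the weight to where it started,
    which is what degrades analog in situ training accuracy.
    """
    # Normalization so the curve spans [g_min, g_max] over max_pulses pulses.
    b = (g_max - g_min) / (1.0 - np.exp(-max_pulses / nonlinearity_a))
    p = np.clip(num_pulses, 0, max_pulses)
    if potentiation:
        return b * (1.0 - np.exp(-p / nonlinearity_a)) + g_min
    # Depression: decay from g_max toward g_min.
    return g_max - b * (1.0 - np.exp(-p / nonlinearity_a))

# Example: 10 potentiation pulses followed by 10 depression pulses do not
# cancel out under this nonlinear, asymmetric model.
g_up = conductance_after_pulses(10, potentiation=True)
g_down_step = conductance_after_pulses(10, potentiation=False)
print(g_up, g_down_step)
```

A hybrid precision cell sidesteps this effect by accumulating small training updates in a volatile, near-linear element and transferring the weight to the nonvolatile FeFET only periodically, so the nonlinear device is programmed far less often.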
Keywords