IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (Jan 2022)

An Energy Efficient Time-Multiplexing Computing-in-Memory Architecture for Edge Intelligence

  • Rui Xiao,
  • Wenyu Jiang,
  • Piew Yoong Chee

DOI
https://doi.org/10.1109/JXCDC.2022.3206879
Journal volume & issue
Vol. 8, no. 2
pp. 111–118

Abstract

The growing data volume and complexity of deep neural networks (DNNs) require new architectures that overcome the von Neumann bottleneck, and computing-in-memory (CIM) is a promising direction for implementing energy-efficient neural networks. However, CIM’s peripheral sensing circuits are usually power- and area-hungry. We propose a time-multiplexing CIM architecture (TM-CIM) based on memristive analog computing that shares the peripheral circuits and processes one column at a time. The memristor array is arranged column-wise, which avoids wasting power and energy on unselected columns. In addition, the digital-to-analog converters (DACs), whose power and energy overhead turns out to exceed that of the analog-to-digital converters (ADCs), can be fine-tuned in TM-CIM for significant improvement. For a $256\times 256$ crossbar array with a typical setting, TM-CIM reduces energy by $18.4\times $, achieving 0.136 pJ/MAC, and reduces area by $19.9\times $ in the 1T1R case and $15.9\times $ in the 2T2R case. Performance estimation on VGG-16 indicates that TM-CIM can save over $16\times $ in area. A tradeoff among chip area, peak power, and latency is also presented, along with a proposed scheme that further reduces latency on VGG-16 without significantly increasing chip area or peak power.
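To put the abstract's headline numbers in context, the following is a minimal back-of-envelope sketch relating the reported 0.136 pJ/MAC efficiency and the $18.4\times $ energy savings to per-operation energy for a $256\times 256$ crossbar. The array size, pJ/MAC figure, and savings factor come from the abstract; the implied baseline energy is a derived illustration, not a value stated in the paper.

```python
# Back-of-envelope energy model for one analog matrix-vector multiply (MVM)
# on a 256x256 memristor crossbar, using figures quoted in the abstract.

ROWS, COLS = 256, 256  # crossbar dimensions from the abstract


def macs_per_mvm(rows: int = ROWS, cols: int = COLS) -> int:
    """One full crossbar MVM performs rows * cols multiply-accumulates."""
    return rows * cols


def energy_per_mvm_pj(e_per_mac_pj: float, rows: int = ROWS, cols: int = COLS) -> float:
    """Total energy (pJ) of one MVM at a given per-MAC energy."""
    return e_per_mac_pj * macs_per_mvm(rows, cols)


tm_cim_pj = energy_per_mvm_pj(0.136)   # TM-CIM: 0.136 pJ/MAC (abstract)
baseline_pj = tm_cim_pj * 18.4         # implied baseline, from the 18.4x savings

print(f"MACs per MVM:    {macs_per_mvm()}")
print(f"TM-CIM energy:   {tm_cim_pj / 1000:.2f} nJ per 256x256 MVM")
print(f"Implied baseline:{baseline_pj / 1000:.2f} nJ per 256x256 MVM")
```

At 0.136 pJ/MAC, a single full-array MVM (65,536 MACs) costs roughly 8.9 nJ, which implies a conventional always-on-peripherals baseline of about 164 nJ per MVM under the abstract's $18.4\times $ figure.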

Keywords