Advanced Intelligent Systems (Aug 2022)

Mixed‐Precision Continual Learning Based on Computational Resistance Random Access Memory

  • Yi Li,
  • Woyu Zhang,
  • Xiaoxin Xu,
  • Yifan He,
  • Danian Dong,
  • Nanjia Jiang,
  • Fei Wang,
  • Zeyu Guo,
  • Shaocong Wang,
  • Chunmeng Dou,
  • Yongpan Liu,
  • Zhongrui Wang,
  • Dashan Shang

DOI
https://doi.org/10.1002/aisy.202200026
Journal volume & issue
Vol. 4, no. 8

Abstract


Artificial neural networks have achieved remarkable success in the field of artificial intelligence. However, they suffer from catastrophic forgetting when dealing with continual learning problems, i.e., the loss of previously learned knowledge upon learning new information. Although several continual learning algorithms have been proposed, implementing them efficiently on conventional digital systems remains a challenge due to the physical separation between memory and processing units. Herein, a software–hardware codesigned in‐memory computing paradigm is proposed, where a mixed‐precision continual learning (MPCL) model is deployed on a hybrid analogue–digital hardware system equipped with a resistance random access memory chip. Software‐wise, the MPCL model effectively alleviates catastrophic forgetting and circumvents the requirement for high‐precision weights. Hardware‐wise, the hybrid analogue–digital system exploits the colocation of memory and processing units, greatly improving energy efficiency. By combining the MPCL with an in situ fine‐tuning method, high classification accuracies of 94.9% and 95.3% (software baselines 97.0% and 97.7%) are achieved on the 5‐split‐MNIST and 5‐split‐FashionMNIST benchmarks, respectively. Compared to conventional digital systems, the proposed system reduces the energy consumption of the multiply‐and‐accumulate operations during the inference phase by a factor of ≈200. This work paves the way for future autonomous systems at the edge.
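The core idea of the mixed-precision scheme described above can be illustrated with a minimal sketch: the multiply-and-accumulate (MAC) step runs on weights quantized to a few discrete levels, mimicking the limited conductance states of an RRAM array, while full-precision arithmetic serves as the digital reference. This is not the authors' implementation; the level count (8) and the uniform quantizer are assumptions chosen for illustration.

```python
import numpy as np

def quantize(w, levels=8):
    """Map weights onto a few evenly spaced, conductance-like levels
    (a stand-in for discrete RRAM states; 8 levels is an assumption)."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # full-precision weight matrix
x = rng.normal(size=8)        # input vector

y_full = W @ x                # digital full-precision MAC (reference)
y_quant = quantize(W) @ x     # MAC with low-precision "analogue" weights

print(np.abs(y_full - y_quant).max())  # residual quantization error
```

In the paper's setting, such residual error is what the in situ fine-tuning step compensates for after the low-precision weights are programmed into the chip.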

Keywords