Advanced Science (Oct 2023)

Device‐Algorithm Co‐Optimization for an On‐Chip Trainable Capacitor‐Based Synaptic Device with IGZO TFT and Retention‐Centric Tiki‐Taka Algorithm

  • Jongun Won,
  • Jaehyeon Kang,
  • Sangjun Hong,
  • Narae Han,
  • Minseung Kang,
  • Yeaji Park,
  • Youngchae Roh,
  • Hyeong Jun Seo,
  • Changhoon Joe,
  • Ung Cho,
  • Minil Kang,
  • Minseong Um,
  • Kwang‐Hee Lee,
  • Jee‐Eun Yang,
  • Moonil Jung,
  • Hyung‐Min Lee,
  • Saeroonter Oh,
  • Sangwook Kim,
  • Sangbum Kim

DOI
https://doi.org/10.1002/advs.202303018
Journal volume & issue
Vol. 10, no. 29

Abstract


Analog in‐memory computing synaptic devices are widely studied for the efficient implementation of deep learning. However, synaptic devices based on resistive memory have difficulty implementing on‐chip training due to the lack of means to control the amount of resistance change and due to large device variations. To overcome these shortcomings, silicon complementary metal‐oxide‐semiconductor (Si‐CMOS) and capacitor‐based charge‐storage synapses have been proposed, but it is difficult to obtain sufficient retention time due to Si‐CMOS leakage currents, resulting in a deterioration of training accuracy. Here, a novel 6T1C synaptic device is proposed that uses only n‐type indium gallium zinc oxide thin‐film transistors (IGZO TFTs), with their low leakage current, and a capacitor, allowing not only linear and symmetric weight updates but also sufficient retention time and parallel on‐chip training operations. In addition, an efficient and realistic training algorithm is proposed to compensate for any remaining device non‐idealities, such as drifting references and long‐term retention loss, demonstrating the importance of device‐algorithm co‐optimization.

Keywords