Advanced Intelligent Systems (Aug 2022)

Pattern Training, Inference, and Regeneration Demonstration Using On‐Chip Trainable Neuromorphic Chips for Spiking Restricted Boltzmann Machine

  • Uicheol Shin,
  • Masatoshi Ishii,
  • Atsuya Okazaki,
  • Megumi Ito,
  • Malte J. Rasch,
  • Wanki Kim,
  • Akiyo Nomura,
  • Wonseok Choi,
  • Dooyong Koh,
  • Kohji Hosokawa,
  • Matthew BrightSky,
  • Seiji Munetoh,
  • SangBum Kim

DOI
https://doi.org/10.1002/aisy.202200034
Journal volume & issue
Vol. 4, no. 8
pp. n/a – n/a

Abstract

A fully silicon-integrated restricted Boltzmann machine (RBM) with an event-driven contrastive divergence (eCD) training algorithm is implemented using novel stochastic leaky integrate-and-fire (LIF) neuron circuits and six-transistor/2-PCM-resistor (6T2R) synaptic unit cells in 90 nm CMOS technology. Specifically, a bidirectional, asynchronous, and parallel pulse-signaling scheme over an analog-weighted phase-change memory (PCM) synapse array is designed to enable spike-timing-dependent plasticity (STDP) as a local weight-update rule based on eCD. Building upon the initial version of this work, significantly more experimental details are added, such as on-chip characterization results for the LIF and backward-LIF (BLIF) circuits and the stochasticity of the random-walk circuitry. Experimental characterization of these on-chip stochastic neuron circuits shows reasonable symmetry between LIF and BLIF, as well as the stochasticity necessary for spiking RBM operation. Fully hardware-based image classification achieves 93% on-chip training accuracy on 100 handwritten MNIST digit images. In addition, the generative characteristics of the RBM are demonstrated experimentally by reconstructing partial patterns on hardware. Because each synapse and neuron executes its computations in an asynchronous and fully parallel fashion, the chip can perform data-intensive machine learning (ML) tasks in a power-efficient manner and take advantage of the sparseness of spiking.
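The two building blocks the abstract describes, a stochastic LIF neuron whose noise comes from a random-walk source, and an eCD update that is Hebbian in the data-driven phase and anti-Hebbian in the model-driven phase, can be sketched in software as follows. This is a minimal illustration only: the function names, parameter values, the ±1 random-walk step, and the reset-to-zero behavior are assumptions for the sketch, not the actual circuit behavior of the chip.

```python
import numpy as np

def stochastic_lif_step(v, i_in, leak=0.1, noise_amp=0.5, v_th=1.0, rng=None):
    """One time step of a stochastic leaky integrate-and-fire neuron.

    The additive +/-1 random-walk term loosely mimics on-chip
    random-walk noise circuitry; all constants are illustrative.
    Returns the updated membrane potential and a boolean spike mask.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = v + i_in - leak * v + noise_amp * rng.choice([-1.0, 1.0], size=np.shape(v))
    spiked = v >= v_th
    v = np.where(spiked, 0.0, v)  # simplified reset to zero on spike
    return v, spiked

def ecd_update(w, pre_data, post_data, pre_model, post_model, lr=0.01):
    """Event-driven contrastive-divergence-style local weight update.

    Hebbian term from the data-driven phase minus an anti-Hebbian
    term from the model-driven (reconstruction) phase; on chip this
    role is played by STDP-programmed PCM conductance changes.
    """
    return w + lr * (np.outer(pre_data, post_data)
                     - np.outer(pre_model, post_model))
```

In this sketch a strong input current crosses threshold in one step regardless of the noise sign, while a zero input never does, which is the kind of input-dependent stochastic firing an RBM's sampling relies on.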

Keywords