Frontiers in Neuroscience (Jan 2024)

Chip-In-Loop SNN Proxy Learning: a new method for efficient training of spiking neural networks

  • Yuhang Liu,
  • Tingyu Liu,
  • Yalun Hu,
  • Wei Liao,
  • Yannan Xing,
  • Sadique Sheik,
  • Ning Qiao

DOI
https://doi.org/10.3389/fnins.2023.1323121
Journal volume & issue
Vol. 17

Abstract


The primary approaches to training spiking neural networks (SNNs) either train artificial neural networks (ANNs) first and then convert them into SNNs, or train SNNs directly using surrogate gradient techniques. Both methods, however, share a challenge: they rely on frame-based methodologies, in which asynchronous events are gathered into synchronous frames for computation. This strays from the authentically asynchronous, event-driven nature of SNNs, causing notable performance degradation when the trained models are deployed on SNN simulators or hardware chips for real-time asynchronous computation. To eliminate this degradation, we propose a hardware-based SNN proxy learning method called Chip-In-Loop SNN Proxy Learning (CIL-SPL), which removes the mismatch between synchronous training and asynchronous computation. To demonstrate the effectiveness of our method, we trained models on public datasets such as N-MNIST, tested them on an SNN simulator or hardware chip, and compared the results with those of classical training methods.
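The abstract gives no implementation details, but the general proxy-learning idea it builds on, taking the forward pass from the spiking network (in CIL-SPL, from the chip itself) while backpropagating through a differentiable ANN proxy that shares its weights, can be sketched minimally. Everything below is an illustrative assumption, not the paper's implementation: the function names are hypothetical, and a software LIF layer stands in for the hardware chip in the loop.

```python
import numpy as np

def lif_forward(x, w, t_steps=10, v_th=1.0):
    """Stand-in for the chip/simulator forward pass (assumption, not the
    paper's hardware): a simple LIF layer that integrates input current,
    fires at threshold, and hard-resets. Returns per-neuron firing rates."""
    i_in = w @ x                      # constant input current per neuron
    v = np.zeros(w.shape[0])
    spikes = np.zeros(w.shape[0])
    for _ in range(t_steps):
        v += i_in
        fired = v >= v_th
        spikes += fired               # booleans count as 0/1 spikes
        v[fired] = 0.0                # hard reset after a spike
    return spikes / t_steps

def proxy_learning_step(x, y, w, lr=0.1):
    """One hypothetical proxy-learning update: the error is measured on the
    SNN's (non-differentiable) firing rates, but the gradient is taken
    through a differentiable ReLU proxy that shares the same weights."""
    rates = lif_forward(x, w)             # forward: SNN / chip-in-loop
    proxy_pre = w @ x                     # proxy pre-activation
    err = rates - y                       # MSE gradient w.r.t. output
    d_relu = (proxy_pre > 0).astype(float)  # backward: proxy's gradient
    grad_w = np.outer(err * d_relu, x)
    return w - lr * grad_w
```

The key design point this sketch illustrates is that the weight update never differentiates through the spiking dynamics; the spiking forward pass only supplies the outputs on which the error is measured, so the same update rule applies whether those outputs come from a simulator or from hardware.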
