IEEE Access (Jan 2024)

Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA

  • Ali Mehrabi
  • Yeshwanth Bethi
  • Andre van Schaik
  • Andrew Wabnitz
  • Saeed Afshar

DOI: https://doi.org/10.1109/ACCESS.2024.3500134
Journal volume & issue: Vol. 12, pp. 170980–170993

Abstract


This paper presents an efficient hardware implementation of the recently proposed Optimised Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to support end-to-end, multi-layer, online, local supervised training without using gradients, combining the adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently at scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and an individual training module for each layer, enabling online self-learning without back-propagation. By using simple local adaptive selection thresholds, a Winner-Take-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements between layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimised implementation is shown to preserve the performance of the original algorithm across multiple spatio-temporal classification problems while significantly reducing hardware requirements.
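To make the training scheme described above concrete, here is a minimal Python sketch of a gradient-free, winner-take-all layer with local weight and threshold adaptation in the style the abstract describes. It is not the paper's actual update rule: the class name WTALayer, the constants ETA_W, ETA_TH, and TH_OPEN, and the specific update equations are illustrative assumptions; ODESA's exact rules, event encoding, and supervision signals are defined in the paper itself.

```python
import numpy as np

# Illustrative constants; the paper's actual hyperparameters are not given here.
ETA_W = 0.01    # weight learning rate (assumed)
ETA_TH = 0.05   # threshold adaptation rate (assumed)
TH_OPEN = 0.9   # factor that relaxes thresholds when no neuron fires (assumed)

class WTALayer:
    """One winner-take-all layer with local, gradient-free adaptation."""

    def __init__(self, n_inputs, n_neurons, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.w = rng.uniform(0.0, 1.0, size=(n_neurons, n_inputs))
        self.theta = np.full(n_neurons, 0.5)  # per-neuron selection thresholds

    def forward(self, trace):
        """trace: decayed input-spike trace (one value per input channel).
        Returns the index of the single winning neuron, or None if no
        neuron's activation crosses its own threshold (the WTA constraint)."""
        act = self.w @ trace
        winner = int(np.argmax(act))
        return winner if act[winner] >= self.theta[winner] else None

    def train(self, trace, winner, reward):
        """Local update driven by a binary reward spike; no gradients or
        high-precision errors cross layer boundaries. A rewarded winner
        pulls its weights toward the input trace and raises its threshold
        toward the observed activation; when nothing fires, all thresholds
        relax so neurons become easier to recruit (assumed rules)."""
        if winner is None:
            self.theta *= TH_OPEN
            return
        if reward:
            self.w[winner] += ETA_W * (trace - self.w[winner])
            act = self.w[winner] @ trace
            self.theta[winner] += ETA_TH * (act - self.theta[winner])
        else:
            # An unrewarded winner becomes more selective (assumed rule).
            self.theta[winner] += ETA_TH

# Minimal usage: one layer, one random input trace, rewarded winner.
layer = WTALayer(n_inputs=8, n_neurons=4)
x = np.random.default_rng(1).random(8)
w = layer.forward(x)
layer.train(x, w, reward=True)
```

Note how every signal crossing the module boundary (the winner index and the reward) is a single binary event, which is what makes this style of trainer amenable to an all-spiking hardware implementation.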

Keywords