Neuromorphic Computing and Engineering (Jan 2023)

Unsupervised and efficient learning in sparsely activated convolutional spiking neural networks enabled by voltage-dependent synaptic plasticity

  • Gaspard Goupy,
  • Alexandre Juneau-Fecteau,
  • Nikhil Garg,
  • Ismael Balafrej,
  • Fabien Alibart,
  • Luc Frechette,
  • Dominique Drouin,
  • Yann Beilliard

DOI
https://doi.org/10.1088/2634-4386/acad98
Journal volume & issue
Vol. 3, no. 1
p. 014001

Abstract

Spiking neural networks (SNNs) are gaining attention due to their energy-efficient computing ability, which makes them relevant for implementation on low-power neuromorphic hardware. Their biological plausibility allows them to benefit from unsupervised learning with bio-inspired plasticity rules, such as spike timing-dependent plasticity (STDP). However, standard STDP has limitations that make it challenging to implement in hardware. In this paper, we propose a convolutional SNN (CSNN) integrating single-spike integrate-and-fire (SSIF) neurons and trained for the first time with voltage-dependent synaptic plasticity (VDSP), a novel unsupervised and local plasticity rule developed for the implementation of STDP on memristor-based neuromorphic hardware. We evaluated the CSNN on the TIDIGITS dataset, where, aided by our sound preprocessing pipeline, we surpassed the state of the art with a mean accuracy of 99.43%. Moreover, the use of SSIF neurons, coupled with time-to-first-spike (TTFS) encoding, yields a sparsely activated model: we recorded a mean of 5036 spikes per input over the 172 580 neurons of the network. This makes the proposed CSNN promising for the development of extremely energy-efficient models. We also demonstrate the efficiency of VDSP on the MNIST dataset, where we obtained results comparable to the state of the art, with an accuracy of 98.56%. Our adaptation of VDSP for SSIF neurons introduces a depression factor that proved very effective at reducing the number of training samples needed, and hence the training time, by a factor of two or more, with similar performance.
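
To make the mechanisms named in the abstract concrete, the sketch below illustrates, in simplified form, TTFS encoding, a single-spike integrate-and-fire neuron, and a VDSP-style weight update with a depression factor. It is a minimal illustration only: the function and parameter names (ttfs_encode, SSIFNeuron, vdsp_update, depression_factor, the learning rate), and the sign convention of reading the postsynaptic membrane potential at presynaptic spike times and potentiating when it is above rest while depressing otherwise, are assumptions made for this sketch and do not reproduce the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def ttfs_encode(x, t_max=20):
        # Time-to-first-spike encoding: stronger inputs spike earlier.
        # x is assumed normalized to [0, 1]; each input emits at most one
        # spike, and t_max means "no spike".
        return np.where(x > 0, np.round((1.0 - x) * (t_max - 1)).astype(int), t_max)

    class SSIFNeuron:
        # Single-spike integrate-and-fire neuron (simplified for illustration):
        # integrates weighted presynaptic spikes and emits at most one spike
        # per input sample, then stays silent (sparse activation).
        def __init__(self, n_inputs, threshold=1.0, v_rest=0.0):
            self.w = rng.uniform(0.3, 0.7, n_inputs)
            self.threshold = threshold
            self.v_rest = v_rest
            self.reset()

        def reset(self):
            self.v = self.v_rest
            self.fired = False

        def step(self, pre_spikes):
            if self.fired:
                return False
            self.v += np.dot(self.w, pre_spikes)
            if self.v >= self.threshold:
                self.fired = True
                self.v = self.v_rest  # reset after the single spike
                return True
            return False

    def vdsp_update(w, v_post, pre_spikes, v_rest=0.0,
                    lr=0.01, depression_factor=2.0, w_min=0.0, w_max=1.0):
        # Hypothetical VDSP-style rule, applied only at presynaptic spike times:
        # the postsynaptic membrane potential read at that moment decides between
        # potentiation (above rest) and depression (otherwise), with depression
        # scaled by depression_factor. Constants are illustrative.
        active = pre_spikes.astype(bool)
        if v_post > v_rest:
            w[active] += lr * (w_max - w[active])                       # potentiation
        else:
            w[active] -= depression_factor * lr * (w[active] - w_min)   # depression
        return np.clip(w, w_min, w_max)

    # Tiny usage example: one neuron, one TTFS-encoded input pattern.
    x = rng.random(16)
    spike_times = ttfs_encode(x, t_max=20)
    neuron = SSIFNeuron(n_inputs=16)
    for t in range(20):
        pre = (spike_times == t).astype(float)
        v_before = neuron.v
        fired = neuron.step(pre)
        if pre.any():
            neuron.w = vdsp_update(neuron.w, v_before, pre)
        if fired:
            print(f"neuron fired at t={t}")
            break

Because each neuron emits at most one spike per sample and learning is local to each synapse, this toy setup mirrors the sparsity and hardware-friendliness the abstract highlights, without claiming to match the trained CSNN architecture.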

Keywords