Neuromorphic Computing and Engineering (Jan 2024)

Reducing the spike rate of deep spiking neural networks based on time-encoding

  • Riccardo Fontanini,
  • Alessandro Pilotto,
  • David Esseni,
  • Mirko Loghi

DOI
https://doi.org/10.1088/2634-4386/ad64fd
Journal volume & issue
Vol. 4, no. 3
p. 034004

Abstract


A primary objective of spiking neural networks (SNNs) is highly energy-efficient computation. To achieve this goal, a low spike rate is very beneficial, given the event-driven nature of such computation. A network that processes information encoded in spike timing can, by its nature, operate with a sparse event rate; however, as the network becomes deeper and larger, the spike rate tends to increase without any improvement in the final accuracy. If, on the other hand, a penalty on excess spikes is applied during training, the network may shift to a configuration in which many neurons are silent, undermining the effectiveness of the training itself. In this paper, we present a learning strategy that keeps the final spike rate under control by modifying the loss function to penalize the spikes a neuron generates after its first one. Moreover, we propose a two-phase training strategy to avoid silent neurons during training, intended for benchmarks where this issue can switch off the network.
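
A minimal sketch of the kind of regularizer the abstract describes, assuming a PyTorch-style setting in which the network exposes its spike trains as a binary tensor of shape (time, batch, neurons). The function names (excess_spike_penalty, loss_fn) and the weighting factor lambda_reg are illustrative assumptions, not the paper's exact formulation; in practice, gradients through binary spikes would also require a surrogate-gradient mechanism.

    import torch

    def excess_spike_penalty(spikes: torch.Tensor) -> torch.Tensor:
        """Count spikes emitted by each neuron after its first one.

        spikes: binary tensor of shape (T, B, N) -- time steps, batch, neurons.
        Returns a scalar: the mean number of "excess" spikes per neuron.
        """
        # Total number of spikes per neuron over the time window.
        total = spikes.sum(dim=0)                   # shape (B, N)
        # Each neuron is allowed one spike "for free"; anything beyond is excess.
        excess = torch.clamp(total - 1.0, min=0.0)  # shape (B, N)
        return excess.mean()

    def loss_fn(task_loss: torch.Tensor, spikes: torch.Tensor,
                lambda_reg: float = 0.1) -> torch.Tensor:
        # Combined objective: task (accuracy) term plus spike-rate regularizer,
        # so training trades off classification performance against extra spikes.
        return task_loss + lambda_reg * excess_spike_penalty(spikes)

Unlike a plain spike-count penalty, clamping at one spike per neuron leaves the first spike unpenalized, which is the property that helps keep neurons from going completely silent.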

Keywords