Neuromorphic Computing and Engineering (Jan 2024)

Efficient sparse spiking auto-encoder for reconstruction, denoising and classification

  • Ben Walters,
  • Hamid Rahimian Kalatehbali,
  • Zhengyu Cai,
  • Roman Genov,
  • Amirali Amirsoleimani,
  • Jason Eshraghian,
  • Mostafa Rahimi Azghadi

DOI
https://doi.org/10.1088/2634-4386/ad5c97
Journal volume & issue
Vol. 4, no. 3
p. 034005

Abstract

Auto-encoders can perform input reconstruction, denoising, and classification through an encoder-decoder structure. Spiking Auto-Encoders (SAEs) can exploit asynchronous sparse spikes to improve power efficiency and processing latency on neuromorphic hardware. In this work, we propose an efficient SAE trained using only Spike-Timing-Dependent Plasticity (STDP) learning. Our auto-encoder uses the Time-To-First-Spike (TTFS) encoding scheme and updates each synaptic weight only once per input, promoting both training and inference efficiency through extreme sparsity. We showcase robust reconstruction performance on the Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets with 1–3 orders of magnitude fewer spikes than state-of-the-art SAEs. Moreover, we achieve robust noise reduction on the MNIST dataset. When the same noisy inputs are used for classification, accuracy degradation is reduced by 30%–80% compared to prior works. Our model also attains classification accuracies comparable to previous STDP-based classifiers, while remaining competitive with backpropagation-based spiking classifiers that require global learning through gradients and significantly more spikes for encoding and classification of MNIST/Fashion-MNIST inputs. These results demonstrate a promising pathway towards efficient sparse spiking auto-encoders with local learning, making them highly suited for hardware integration.
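The extreme sparsity described above follows from TTFS encoding: each input value produces at most one spike, with stronger inputs firing earlier. The sketch below illustrates the idea with a simple linear latency map; the mapping function and `t_max` parameter are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def ttfs_encode(pixels, t_max=100.0):
    """Time-To-First-Spike encoding (illustrative sketch).

    Each input value emits exactly one spike; higher intensities fire
    earlier.  A linear latency map is one common choice -- the paper may
    use a different mapping.
    """
    x = np.clip(np.asarray(pixels, dtype=float), 0.0, 1.0)
    # Intensity 1.0 -> spike at time 0 (earliest);
    # intensity 0.0 -> spike at t_max (latest).
    return t_max * (1.0 - x)

spike_times = ttfs_encode([0.0, 0.5, 1.0])
print(spike_times)  # [100.  50.   0.]
```

Because every input pixel contributes a single spike per presentation, the number of spikes per sample is bounded by the input dimensionality, in contrast to rate-coded schemes that emit many spikes per neuron.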

Keywords