IEEE Access (Jan 2020)
Area- and Energy-Efficient STDP Learning Algorithm for Spiking Neural Network SoC
Abstract
Recently, spiking neural networks have gained attention owing to their energy efficiency. All-to-all spike-timing-dependent plasticity (STDP) is a popular learning algorithm for spiking neural networks because it is suitable for nondifferentiable, spike-event-based learning and requires fewer computations than back-propagation-based algorithms. However, hardware implementation of all-to-all STDP is limited by the large storage area required for the spike history and by the large energy consumption caused by frequent memory access. We propose two hardware-friendly variants: a time-step scaled STDP that reduces the spike-history storage, shrinking the STDP learning circuit area by 60%, and a post-neuron spike-referred STDP that reduces energy consumption by 99.1% through efficient memory access during learning. When both variants were applied, the accuracy of Modified National Institute of Standards and Technology (MNIST) image classification degraded by less than 2%. Thus, the proposed hardware-friendly STDP algorithms make all-to-all STDP implementable in a more compact area with lower energy consumption and insignificant accuracy degradation.
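To make the all-to-all scheme concrete, the following is a minimal sketch of a pair-based STDP weight update with exponential timing windows, in which every pre/post spike pair contributes (hence the spike-history storage cost noted above). The function name and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def stdp_dw(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Sum the STDP weight change over every pre/post spike pair
    (the all-to-all scheme: no nearest-neighbour restriction).
    Spike times are in the same units as tau; parameters are illustrative."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:       # pre fires before post -> potentiation
                dw += a_plus * math.exp(-dt / tau)
            elif dt < 0:     # post fires before pre -> depression
                dw -= a_minus * math.exp(dt / tau)
    return dw
```

Because the double loop requires every past spike time of both neurons, a naive implementation must store the full spike history per synapse, which is the storage and memory-access burden the proposed variants address.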
Keywords