IEEE Access (Jan 2021)

Accelerating Spike-by-Spike Neural Networks on FPGA With Hybrid Custom Floating-Point and Logarithmic Dot-Product Approximation

  • Yarib Nevarez,
  • David Rotermund,
  • Klaus R. Pawelzik,
  • Alberto Garcia-Ortiz

DOI
https://doi.org/10.1109/ACCESS.2021.3085216
Journal volume & issue
Vol. 9
pp. 80603 – 80620

Abstract

Spiking neural networks (SNNs) represent a promising alternative to conventional neural networks. In particular, the so-called Spike-by-Spike (SbS) neural networks provide exceptional noise robustness and reduced complexity. However, deep SbS networks require a memory footprint and a computational cost unsuitable for embedded applications. To address this problem, this work exploits the intrinsic error resilience of neural networks to improve performance and to reduce hardware complexity. More precisely, we design a vector dot-product hardware unit based on approximate computing with configurable quality, using a hybrid of custom floating-point and logarithmic number representations. This approach reduces computational latency, memory footprint, and power dissipation while preserving inference accuracy. To demonstrate our approach, we present a design exploration flow using high-level synthesis on a Xilinx SoC-FPGA. The proposed design reduces computational latency by $20.5\times$ and weight memory footprint by $8\times$, with less than 0.5% accuracy degradation on a handwritten digit recognition task.
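The key idea behind the logarithmic side of the dot product is that a weight stored as a (signed) power of two turns each multiplication into an exponent addition, which in fixed-point hardware is just a barrel shift. The following C sketch illustrates this principle in software; it is a minimal illustration under the assumption that weights are quantized to the nearest power of two, and all type and function names (log_weight_t, quantize_log, approx_dot) are hypothetical rather than taken from the paper's HLS code.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Hypothetical logarithmic weight: a sign bit plus an exponent, so the
 * weight value is approximately (-1)^sign * 2^exponent. Multiplying by
 * such a weight needs no hardware multiplier, only a shift. */
typedef struct {
    uint8_t sign;      /* 0 = positive, 1 = negative */
    int8_t  exponent;  /* base-2 exponent of the magnitude */
} log_weight_t;

/* Quantize a real-valued weight to the nearest power of two. */
static log_weight_t quantize_log(float w)
{
    log_weight_t q;
    q.sign = (w < 0.0f);
    q.exponent = (int8_t)lroundf(log2f(fabsf(w) + 1e-30f));
    return q;
}

/* Approximate dot product: each multiply becomes a scale by 2^exponent
 * (ldexpf here), i.e. an exponent addition / shift in hardware. */
static float approx_dot(const float *x, const log_weight_t *w, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        float term = ldexpf(x[i], w[i].exponent);
        acc += w[i].sign ? -term : term;
    }
    return acc;
}

int main(void)
{
    float x[4]  = {0.5f, 1.25f, -0.75f, 2.0f};
    float wf[4] = {0.26f, -0.5f, 1.1f, 0.12f};
    log_weight_t w[4];
    float exact = 0.0f;
    for (int i = 0; i < 4; ++i) {
        w[i] = quantize_log(wf[i]);
        exact += x[i] * wf[i];
    }
    printf("exact  = %f\napprox = %f\n", exact, approx_dot(x, w, 4));
    return 0;
}
```

Compiled with `-lm`, the example prints the exact and the power-of-two-approximated dot product side by side; the gap between the two is the kind of quantization error whose impact on inference accuracy the paper evaluates.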

Keywords