Advances in Electrical and Electronic Engineering (Jan 2014)

FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators

  • Gabriele-Maria Lozito,
  • Antonino Laudani,
  • Francesco Riganti Fulginei,
  • Alessandro Salvini

DOI
https://doi.org/10.15598/aeee.v12i1.831
Journal volume & issue
Vol. 12, no. 1
pp. 30 – 39

Abstract


This paper analyzes different solutions for implementing a neural network architecture on an FPGA by using floating point accelerators. In particular, two implementations are investigated: a high level solution, which creates a neural network on a soft processor design and applies different strategies to enhance processing performance; and a low level solution, achieved by a cascade of floating point arithmetic elements. The architectures are compared in terms of both execution time and the FPGA resources they employ.

Keywords