Engineering Science and Technology, an International Journal (Nov 2023)
An efficient design methodology to speed up the FPGA implementation of artificial neural networks
Abstract
In this paper, we propose and formulate a C++-based training methodology for speeding up the implementation of an Artificial Neural Network (ANN) on a Field Programmable Gate Array (FPGA). The proposed ANN implementation methodology uses a custom C++ program, referred to as the Neural Network Design Parameter Extraction (NNDPE) program, developed using the open-source Fast Artificial Neural Network (FANN) library. The NNDPE program reduces the time required to train the ANN and to extract its design parameters, such as the number of layers, the number of neurons in each layer, and the weights. The extracted ANN design parameters, custom hardware arithmetic units, and function-approximated activation functions are used to implement the ANN hardware architecture of a linear function on the Virtex-7 FPGA platform. The Vivado 2018.3 tool is used to simulate, synthesize, and implement the ANN-based linear function. Simulation shows that the ANN hardware implementation achieves higher precision with floating-point arithmetic operations than an equivalent ANN using fixed-point arithmetic operations. The ANN implemented on the Virtex-7 FPGA operates at 150.76 MHz, which is approximately 11 to 18 times faster than the software implementation running on various CPU cores with operating frequencies ranging from 2.2 GHz to 4.70 GHz.