Jisuanji kexue yu tansuo (Journal of Frontiers of Computer Science and Technology), Nov 2021

Review on FPGA-Based Accelerators in Deep Learning

  • LIU Tengda¹, ZHU Junwen¹, ZHANG Yiwen²⁺

DOI: https://doi.org/10.3778/j.issn.1673-9418.2104012
Journal volume & issue: Vol. 15, No. 11, pp. 2093–2104

Abstract


In recent years, with the rapid development of the Internet and big data, artificial intelligence has flourished, and it is the rise of deep learning that has driven this rapid progress. A pressing problem in the era of big data is how to effectively analyze and exploit extremely complex and diverse data, so as to make full use of its value for the benefit of humankind. As a machine learning technique, deep learning, which has been widely applied in speech recognition, image recognition, natural language processing, and many other fields, is an important tool for solving this problem; it plays an increasingly important role in data processing and is changing traditional machine learning methods. How to effectively accelerate deep learning computation has long been a focus of research. With strong parallel computing capability and low power consumption, the FPGA has become a strong competitor to the GPU in the field of deep learning acceleration. Starting from typical deep learning models and building on the existing characteristics of FPGA acceleration technology, this paper surveys the research status of various accelerators from four aspects: accelerators for neural network models, accelerators for specific applications, accelerators based on optimization strategies, and general accelerator frameworks with hardware templates. The performance of different acceleration techniques on different models is then compared. Finally, possible future development directions are discussed.
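To illustrate the kind of parallelism the abstract refers to, the sketch below shows how a single fully connected layer might be expressed for an FPGA using high-level synthesis (HLS) pragmas. This is a minimal illustrative example, not code from the reviewed paper: the pragma names follow Xilinx/AMD Vitis HLS conventions, and the function name, layer sizes, and parameters are hypothetical.

```c
/*
 * Illustrative sketch only (not from the paper): expressing FPGA parallelism
 * for a neural-network kernel with HLS pragmas. Compiles as plain C; the
 * #pragma lines are directives for an HLS tool such as Vitis HLS and are
 * ignored by an ordinary C compiler.
 */
#include <stddef.h>

#define IN_DIM  256   /* hypothetical input width  */
#define OUT_DIM  64   /* hypothetical output width */

/* One fully connected layer: out = W * in (bias and activation omitted). */
void fc_layer(const float weights[OUT_DIM][IN_DIM],
              const float in[IN_DIM],
              float out[OUT_DIM])
{
    /* In practice the on-chip arrays would also need partitioning so that
       all multipliers can read their operands in the same cycle, e.g.:
       #pragma HLS ARRAY_PARTITION variable=in complete dim=1 */
    for (size_t o = 0; o < OUT_DIM; o++) {
#pragma HLS PIPELINE II=1
        /* Pipelining the output loop makes the HLS tool unroll the inner
           reduction, instantiating many multiply-accumulate units in
           parallel instead of executing them sequentially. */
        float acc = 0.0f;
        for (size_t i = 0; i < IN_DIM; i++) {
            acc += weights[o][i] * in[i];
        }
        out[o] = acc;
    }
}
```

The design choice sketched here (pipelining an outer loop so inner loops are spatially unrolled) is one common way FPGA accelerators trade chip area for throughput, which is the property the abstract contrasts with GPU acceleration.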

Keywords