Journal of Low Power Electronics and Applications (Feb 2022)

DSCU: Accelerating CNN Inference in FPGAs with Dual Sizes of Compute Unit

  • Zhenshan Bao,
  • Junnan Guo,
  • Wenbo Zhang,
  • Hongbo Dang

DOI
https://doi.org/10.3390/jlpea12010011
Journal volume & issue
Vol. 12, no. 1
p. 11

Abstract

FPGA-based accelerators have shown great potential for improving the performance of CNN inference. However, existing FPGA-based approaches suffer from low compute unit (CU) efficiency due to a large number of redundant computations, leading to significant performance degradation. In this paper, we show that no single CU size performs best across all convolutional layers (CONV-layers). To this end, we propose dual sizes of compute unit (DSCU), an approach that accelerates CNN inference on FPGAs. The key idea of DSCU is to select the best combination of CUs for each CONV-layer via dynamic programming scheduling and then assemble the per-layer combinations into a computing solution for the given CNN to deploy on the FPGA. Experimental results show that DSCU achieves a performance density of 3.36 × 10⁻³ GOPs/slice on a Xilinx Zynq ZU3EG, which is 4.29 times higher than that of other approaches.
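The per-layer selection described above can be viewed as a multiple-choice knapsack problem solvable by dynamic programming: each CONV-layer offers several candidate CU combinations, each with an estimated latency and slice cost, and one combination must be chosen per layer so that the total slice usage fits the FPGA while the summed latency is minimized. The sketch below illustrates this idea only; the function name, the candidate tuples, and all numbers are hypothetical and not taken from the paper.

```python
def schedule(layers, budget):
    """Pick one CU combination per CONV-layer, minimizing total latency
    under a total slice budget (multiple-choice knapsack DP).

    layers: per layer, a list of (latency, slices) candidate combinations.
    budget: total slices available on the FPGA.
    Returns (total_latency, [chosen candidate index per layer]),
    or None if no feasible assignment exists.
    """
    # dp maps slices-used -> (best total latency so far, chosen indices)
    dp = {0: (0.0, [])}
    for cands in layers:
        nxt = {}
        for used, (lat, picks) in dp.items():
            for i, (l, s) in enumerate(cands):
                u = used + s
                if u > budget:
                    continue  # this combination would overflow the fabric
                entry = (lat + l, picks + [i])
                if u not in nxt or entry[0] < nxt[u][0]:
                    nxt[u] = entry  # keep the fastest schedule at this cost
        dp = nxt
    if not dp:
        return None
    return min(dp.values(), key=lambda t: t[0])


# Illustrative two-layer example with made-up latency/slice estimates:
# each layer chooses between a small-CU-only combo and a mixed combo.
layers = [
    [(10.0, 40), (6.0, 70)],  # layer 1 candidates
    [(8.0, 30), (5.0, 60)],   # layer 2 candidates
]
print(schedule(layers, budget=110))  # -> (14.0, [1, 0])
```

With a budget of 110 slices, the DP rejects the all-mixed schedule (130 slices) and settles on the mixed combo for layer 1 plus the small combo for layer 2, which is exactly the kind of per-layer trade-off the paper's scheduler is described as making.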

Keywords