IEEE Access (Jan 2024)

Enhancement of Convolutional Neural Network Hardware Accelerators Efficiency Using Sparsity Optimization Framework

  • Hemalatha Kurapati,
  • Sakthivel Ramachandran

DOI
https://doi.org/10.1109/ACCESS.2024.3416062
Journal volume & issue
Vol. 12
pp. 86034–86042

Abstract


Convolutional neural network (CNN) accelerators are widely used in digital applications to improve processing efficiency. However, traditional CNN accelerators deliver insufficient performance for smart digital applications, suffering from high power consumption, long delay, heavy Look-Up Table (LUT) and Random Access Memory (RAM) usage, and low accuracy and throughput. This study therefore designs a modified CNN accelerator for prediction and data-broadcasting applications, named the novel Siberian Tiger-based Convolutional Neural Accelerator architecture (STbCNA). Sparse features and a tiger-fitness data-reuse strategy are employed to obtain accurate prediction outcomes; the predicted rainfall outcome is then broadcast to users to provide rainfall awareness. Throughput and other FPGA parameters were measured and compared against other models, with all traditional approaches executed on the same platform as the proposed design. Among these, the modified CNN (STbCNA) achieved the best results: a high throughput of 150 bps, reduced power of 0.43 W, high accuracy of 92.8%, low delay of 0.5 ns, and LUT usage of 0.001. The Siberian tiger optimization continuously maintained optimal conditions for the FPGA implementation, making STbCNA well suited to FPGA applications seeking optimal outcomes.
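The abstract attributes much of the efficiency gain to exploiting sparsity, i.e. skipping multiply-accumulate (MAC) work when an operand is zero. The following is an illustrative software sketch of that zero-skipping principle only; it is not the paper's STbCNA architecture, and the function name and 1-D setting are hypothetical simplifications.

```python
import numpy as np

def sparse_conv1d(x, w):
    """Naive 1-D valid-mode convolution that gates out zero operands,
    mimicking the operation-skipping idea behind sparsity-aware
    accelerators (illustrative sketch, not the STbCNA hardware)."""
    n_out = len(x) - len(w) + 1
    y = np.zeros(n_out)
    for i in range(n_out):
        acc = 0.0
        for j, wj in enumerate(w):
            xi = x[i + j]
            # Skip the MAC entirely when either operand is zero;
            # in hardware this saves switching power and cycles.
            if xi == 0.0 or wj == 0.0:
                continue
            acc += xi * wj
        y[i] = acc
    return y
```

With sparse activations and weights, most inner-loop iterations take the skip path, which is where an accelerator would save energy; the numerical result matches a dense convolution.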

Keywords