Results in Engineering (Sep 2024)
Analyzing performance: Error-efficient, low-power recursive inexact multipliers for CNN applications
Abstract
Image processing and Convolutional Neural Network (CNN) applications involve substantial data manipulation, and their output quality depends heavily on arithmetic operations. Approximate Computing is widely applied in this context: a certain margin of error is deemed acceptable in exchange for circuit speed, and effort commonly focuses on the binary multiplier, a crucial yet power-intensive component, in order to reduce the complexity of such applications. Inexact Multipliers (IMs) built with a recursive method offer improved design and error metrics. This paper introduces higher-order Recursive-Based Inexact Multipliers (RBIMs), which combine the proposed 4-bit IMs and exact multipliers through the recursive method, aiming to strike a balance between design performance metrics and error tolerance. For evaluation, most of the existing and recently proposed 8-bit and 16-bit IMs are described in Verilog, modeled in Python, and synthesized and simulated using the Cadence Register Transfer Level (RTL) Compiler and Google Colab. The simulation results show a notable improvement in power consumption, with the 8-bit and 16-bit RBIMs outperforming previous designs by up to 36.9 % and 35.5 %, respectively. The efficacy of the RBIMs is further validated through their use in image processing and CNN applications.
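The recursive construction mentioned above can be illustrated with a short sketch. This is not the paper's specific 4-bit IM design; `approx_mult4` is a hypothetical inexact unit (it drops the least-significant partial product) used only to show how an 8-bit product is assembled from four 4-bit sub-products, with approximation confined to the low-significance sub-multiplier:

```python
def exact_mult4(a, b):
    """Exact 4-bit multiplier model."""
    return a * b

def approx_mult4(a, b):
    """Hypothetical inexact 4-bit multiplier: skips the LSB partial
    product of b, trading accuracy for hardware cost."""
    result = 0
    for i in range(1, 4):  # i = 0 (LSB partial product) is ignored
        if (b >> i) & 1:
            result += a << i
    return result

def recursive_mult8(a, b, low_mult=approx_mult4, high_mult=exact_mult4):
    """8-bit multiplier built recursively from 4-bit sub-multipliers:
    A*B = AH*BH<<8 + (AH*BL + AL*BH)<<4 + AL*BL.
    Only the low-significance sub-product AL*BL uses the inexact unit,
    which bounds the overall error."""
    a_h, a_l = a >> 4, a & 0xF
    b_h, b_l = b >> 4, b & 0xF
    return (high_mult(a_h, b_h) << 8) \
         + (high_mult(a_h, b_l) << 4) \
         + (high_mult(a_l, b_h) << 4) \
         + low_mult(a_l, b_l)

# Mean relative error over all nonzero 8-bit operand pairs:
errs = [abs(recursive_mult8(a, b) - a * b) / (a * b)
        for a in range(1, 256) for b in range(1, 256)]
print(f"mean relative error: {sum(errs) / len(errs):.4%}")
```

The same decomposition applies at 16 bits by instantiating four 8-bit units, which is how higher-order RBIMs are grown from the 4-bit building blocks.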