IEEE Access (Jan 2016)
Energy Efficient Iris Recognition With Graphics Processing Units
Abstract
For the past 40 years, Moore's law has predicted the rapid growth of the computer industry. In recent years, however, this growth has slowed for central processing units (CPUs), and the industry has shifted toward multicore computing, in particular with general-purpose graphics processing units (GPUs). Conventional CPUs have between two and eight cores, but a GPU can have hundreds or even thousands of cores. By parallelizing code, the computing power of these cores can be harnessed to greatly accelerate certain algorithms. GPUs, however, are known to consume more power than conventional processors. While previous research has focused on the impact of GPUs on performance, far fewer studies have examined their impact on energy consumption and efficiency. Some researchers have hypothesized that if an algorithm is accelerated sufficiently on a GPU, the shorter execution time will actually cause the GPU to consume less total energy. For the first time to our knowledge, we study the energy efficiency of a GPU with an application to iris recognition. Using GPU-based code written in the C++-compatible compute unified device architecture (CUDA) language, energy consumption tests are performed on basic image processing techniques, including image inversion, thresholding, dilation, and erosion, and on memory- and computationally intensive calculations, such as template matching. We demonstrate that the portions of these algorithms implemented on the GPU reduce energy consumption by as much as 272 times.
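For context, the following is a minimal sketch of the kind of data-parallel CUDA kernel the abstract refers to, using image inversion as the simplest of the listed operations: one thread is mapped to each pixel of an 8-bit grayscale image. The image dimensions, kernel name, and launch configuration are illustrative assumptions, not details of the authors' implementation.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical image size; the paper does not specify iris image dimensions.
    #define WIDTH  640
    #define HEIGHT 480

    // Invert an 8-bit grayscale image: each thread flips one pixel.
    __global__ void invertKernel(unsigned char *img, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            img[i] = 255 - img[i];
    }

    int main()
    {
        const int n = WIDTH * HEIGHT;
        unsigned char *h_img = (unsigned char *)malloc(n);
        for (int i = 0; i < n; ++i)
            h_img[i] = (unsigned char)(i % 256);   // dummy pixel data

        unsigned char *d_img;
        cudaMalloc(&d_img, n);
        cudaMemcpy(d_img, h_img, n, cudaMemcpyHostToDevice);

        // One thread per pixel; 256 threads per block is a common default.
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        invertKernel<<<blocks, threads>>>(d_img, n);

        cudaMemcpy(h_img, d_img, n, cudaMemcpyDeviceToHost);
        printf("first pixel after inversion: %d\n", h_img[0]);

        cudaFree(d_img);
        free(h_img);
        return 0;
    }

Because each pixel is independent, such kernels occupy thousands of GPU cores at once; the hypothesized energy saving rests on this shortened execution time outweighing the GPU's higher instantaneous power draw.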
Keywords