IEEE Access (Jan 2022)

Power Prediction in Register Files Using Machine Learning

  • Mohammed Elnawawy,
  • Assim Sagahyroon,
  • Michel Pasquier

DOI
https://doi.org/10.1109/ACCESS.2022.3172287
Journal volume & issue
Vol. 10
pp. 48358 – 48366

Abstract


Advances in computer architecture and processor design in recent years have created the need for larger register files that can hold more instructions and operands to support faster processors. This has encouraged designers to build wider and deeper register files with multiple read and write ports to increase throughput. Nevertheless, larger register files consume more energy per access, leak more power, and occupy a larger chip area. This poses a significant issue in chip design, given the limited energy resources of the mobile devices that dominate today’s market. It is therefore crucial for chip designers to devise mechanisms that let them study, early in the design process, the effect of increasing register file capabilities on these characteristics. Artificial Neural Network (ANN) techniques have been used, with a reasonable degree of success, to predict the energy characteristics of a register file from three parameters: the number of words in the file (D), the number of bits per word (W), and the total number of read and write ports (P). In this work, using the same attributes, we predict the energy/access, leakage power, and occupied silicon area of register files using several machine learning algorithms, in order to assess design alternatives and their energy and area tradeoffs. We compare our best algorithm to the ANN-based model reported in the literature using the same dataset. Support Vector Machine (SVM) models achieved correlation coefficients of 0.991, 0.991, and 0.989 when predicting energy/access, leakage power, and silicon area, respectively. By comparison, the ANN model achieves correlation coefficients of 0.974, 0.982, and 0.987, while the algorithms closest in performance to SVM achieve 0.917, 0.980, and 0.987, respectively. The results of the conducted experiments show that SVM produces superior results compared to ANN and other algorithms, while maintaining a reasonable model training time and, in most cases, consuming fewer computational resources.
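To make the prediction setup concrete, the following is a minimal sketch (not the authors' code) of how one of the three regression tasks could be framed: an SVM regressor mapping the three register-file parameters (D, W, P) to energy/access, evaluated with the correlation coefficient quoted above. The file name, column names, and SVM hyperparameters are illustrative assumptions, and analogous models would be trained for leakage power and silicon area.

# Minimal sketch, assuming the dataset is a CSV with columns D, W, P, and
# energy_per_access (all names hypothetical).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

data = pd.read_csv("register_file_dataset.csv")   # hypothetical file name
X = data[["D", "W", "P"]].values                   # words, bits per word, ports
y = data["energy_per_access"].values               # one of the three targets

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Scale the features, then fit an RBF-kernel SVM regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.01))
model.fit(X_train, y_train)

# Evaluate with the correlation coefficient between predicted and true values.
y_pred = model.predict(X_test)
r = np.corrcoef(y_test, y_pred)[0, 1]
print(f"Correlation coefficient: {r:.3f}")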

Keywords