Applied Sciences (Nov 2024)

Systematic Analysis of Low-Precision Training in Deep Neural Networks: Factors Influencing Matrix Computations

  • Ao Shen,
  • Zhiquan Lai,
  • Lizhi Zhang

DOI: https://doi.org/10.3390/app142110025
Journal volume & issue: Vol. 14, No. 21, p. 10025

Abstract

As Deep Neural Networks (DNNs) continue to grow in complexity, the computational demands of their training have become a significant bottleneck. Low-precision training has emerged as a crucial strategy, wherein full-precision values are quantized to lower-precision representations, reducing computational overhead while aiming to maintain model accuracy. While prior research has primarily focused on minimizing quantization noise and optimizing performance for specific models and tasks, a comprehensive understanding of the general principles governing low-precision computation across diverse DNN architectures has been lacking. In this paper, we address this gap by systematically analyzing the factors that influence low-precision matrix computations, which are fundamental to DNN training. We investigate three critical factors—accumulation in matrix calculations, the frequency of element usage, and the depth of matrices within the model—and their impact on low-precision training. Through controlled experiments on standard models, as well as customized experiments designed to isolate individual factors, we derive several key insights: layers with higher accumulation and matrices with lower usage frequencies tolerate low-precision noise more readily, without significantly compromising the stability of model training. Additionally, while the depth of a matrix within the model influences the stability of its operations to some extent, it has no noticeable effect on overall training outcomes. Our findings contribute to the development of generalizable principles for low-precision training, offering a systematic framework applicable across various DNN architectures. We provide empirical evidence supporting the strategic allocation of training bit-widths based on the analyzed factors, thereby enhancing the efficiency and effectiveness of DNN training in resource-constrained environments.
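The accumulation factor described in the abstract can be probed with a small simulation. The sketch below (Python/NumPy; it is not the authors' experimental code, and `fake_quantize`, the 8-bit width, and the matrix shapes are illustrative assumptions) fake-quantizes both operands of a matrix multiplication and reports the relative error of the product as the inner (accumulation) dimension grows, which is one simple way to examine how accumulation length interacts with quantization noise at the level of a single matrix computation.

```python
import numpy as np

def fake_quantize(x, bits=8):
    # Symmetric uniform quantization: scale to a signed integer grid, then map back to float.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return x.copy()
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def low_precision_matmul(a, b, bits=8):
    # Quantize both operands; the accumulation itself is kept in float32 here.
    return fake_quantize(a, bits) @ fake_quantize(b, bits)

rng = np.random.default_rng(0)
for k in (64, 1024, 16384):  # inner dimension = number of accumulated terms per output element
    a = rng.standard_normal((128, k), dtype=np.float32)
    b = rng.standard_normal((k, 128), dtype=np.float32)
    exact = a @ b
    approx = low_precision_matmul(a, b, bits=8)
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"K={k:6d}  relative error of 8-bit matmul = {rel_err:.3e}")
```

Varying `bits` or the shapes makes it possible to explore the same question at other precisions; the study itself goes further, measuring how such per-matrix effects translate into end-to-end training stability and accuracy rather than into the error of a single product.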

Keywords