IEEE Open Journal of Vehicular Technology (Jan 2025)

Time Complexity of Training DNNs With Parallel Computing for Wireless Communications

  • Pengyu Cong,
  • Chenyang Yang,
  • Shengqian Han,
  • Shuangfeng Han,
  • Xiaoyun Wang

DOI
https://doi.org/10.1109/OJVT.2025.3526847
Journal volume & issue
Vol. 6
pp. 359 – 384

Abstract


Deep neural networks (DNNs) have been widely used for learning various wireless communication policies. While DNNs have demonstrated the ability to reduce the time complexity of inference, their training often incurs a high computational cost. Since practical wireless systems operate in open and dynamic environments that require retraining, it is crucial to analyze the factors affecting training complexity, which can guide DNN architecture selection and hyper-parameter tuning for efficient policy learning. As a metric of time complexity, the number of floating-point operations (FLOPs) for inference has been analyzed in the literature. However, the time complexity of training DNNs for learning wireless communication policies has only been evaluated in terms of runtime. In this paper, we introduce the number of serial FLOPs (se-FLOPs) as a new metric of time complexity, which accounts for the capability of parallel computing. The se-FLOPs metric is consistent with actual runtime, making it suitable for measuring the time complexity of training DNNs. Since graph neural networks (GNNs) can learn a multitude of wireless communication policies efficiently and their architectures depend on the specific policy, no universal GNN architecture is available for analyzing complexities across different policies. Thus, we first use precoder learning as an example to demonstrate the derivation of the numbers of se-FLOPs required to train several DNNs. Then, we compare the results with the se-FLOPs required for inference of the DNNs and for executing a popular numerical algorithm, and provide the scaling laws of these complexities with respect to the numbers of antennas and users. Finally, we extend the analyses to the learning of general wireless communication policies. We use simulations to validate the analyses and to compare the time complexity of each DNN trained to achieve the best learning performance and to reach an expected performance.
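To illustrate the distinction the abstract draws between total FLOPs and serial FLOPs, the following is a minimal sketch for a single dense layer under an idealized fully parallel compute model. The function names and the binary-reduction-tree assumption are illustrative only; the paper's exact se-FLOPs definition may differ.

```python
import math

def dense_layer_flops(m, n):
    """Total FLOPs of a dense layer y = Wx with W of shape (m, n):
    m*n multiplications plus m*(n-1) additions per output reduction."""
    return m * n + m * (n - 1)

def dense_layer_se_flops(n):
    """Illustrative serial FLOPs under an idealized fully parallel model:
    all m*n products execute in one parallel step, then each n-term sum
    is reduced with a binary tree of depth ceil(log2 n).
    The serial depth is independent of m, since all rows run in parallel."""
    return 1 + math.ceil(math.log2(n))

# For a 64x128 layer, total FLOPs grow with m*n, while the serial
# depth grows only logarithmically with the input dimension n.
total = dense_layer_flops(64, 128)
serial = dense_layer_se_flops(128)
```

Under this toy model, runtime on a sufficiently parallel device tracks the serial depth rather than the total FLOP count, which is why the abstract argues that se-FLOPs, not inference FLOPs, is the metric consistent with measured training runtime.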

Keywords