Scientific Reports (Aug 2024)

Machine-learning-aided method for optimizing beam selection and update period in 5G networks and beyond

  • Ludwing Marenco,
  • Luiz E. Hupalo,
  • Naylson F. Andrade,
  • Felipe A. P. de Figueiredo

DOI
https://doi.org/10.1038/s41598-024-70651-9
Journal volume & issue
Vol. 14, no. 1
pp. 1 – 15

Abstract


Finding the optimal beam pair and update time in 5G systems operating at mmWave frequencies is time-intensive and resource-demanding. This intricate procedure calls for more intelligent approaches. Therefore, this work proposes a machine-learning-based method for optimizing beam pair selection and its update time. The method is structured around three main modules: spatial characterization of beam pair service areas, training of a machine learning model using collected beam pair data, and an algorithm that uses the decision function of the trained model to compute the optimal update time for beam pairs based on the spatial position and velocity of the user equipment. When the machine learning model is deployed in an mmWave scenario simulated in NS3, with a single gNB equipped with an 8×8 UPA and one UE equipped with a 1×2 UPA, improvements in SINR and throughput of up to 407% were observed. These gains stem from an 85.7% reduction in the number of beam pair selections, enabled by an increase of approximately 1543% in the effective time between successive beam pair searches. This method could offer real-time optimization of beam pair procedures in 5G networks and beyond.
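
To illustrate the idea of deriving an update period from a trained model's spatial decision boundaries, the following Python sketch predicts when a moving UE would leave its current beam pair's service area by advancing its position along the velocity vector and querying a classifier at each step. This is only a minimal illustration under assumed choices; the classifier (an scikit-learn SVC), the 2D position features, the toy training labels, the step-forward search, and the function and parameter names (`beam_update_period`, `horizon_s`, `dt`) are all assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's method): estimate the beam pair update
# period by stepping the UE position forward along its velocity vector and
# checking when the predicted serving beam pair changes.
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: 2D UE positions labelled with the index of the
# best beam pair observed at that position (e.g., collected from simulations).
rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(500, 2))     # metres
beam_labels = (positions[:, 0] // 25).astype(int)  # toy service areas

model = SVC(kernel="rbf", decision_function_shape="ovr")
model.fit(positions, beam_labels)

def beam_update_period(pos, vel, model, horizon_s=10.0, dt=0.05):
    """Return the time (s) until the predicted serving beam pair changes,
    capped at horizon_s. pos and vel are 2D numpy arrays (m, m/s)."""
    current_beam = model.predict(pos.reshape(1, -1))[0]
    t = 0.0
    while t < horizon_s:
        t += dt
        future_pos = pos + vel * t
        if model.predict(future_pos.reshape(1, -1))[0] != current_beam:
            return t  # re-select the beam pair before this instant
    return horizon_s  # no change predicted within the horizon

# Example: UE at (10, 50) m moving at 5 m/s along x.
period = beam_update_period(np.array([10.0, 50.0]), np.array([5.0, 0.0]), model)
print(f"Suggested beam pair update period: {period:.2f} s")
```

A scheme of this kind would let the gNB schedule the next beam sweep only when the UE is expected to cross a service-area boundary, which is consistent with the abstract's reported reduction in the number of beam pair selections.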