Applied Sciences (Apr 2022)

Performance Analysis of Selected Machine Learning Techniques for Estimating Resource Requirements of Virtual Network Functions (VNFs) in Software Defined Networks

  • Sahibzada Muhammad Faheem,
  • Mohammad Inayatullah Babar,
  • Ruhul Amin Khalil,
  • Nagham Saeed

DOI
https://doi.org/10.3390/app12094576
Journal volume & issue
Vol. 12, no. 9
p. 4576

Abstract


Rapid development in the field of computer networking is now demanding the application of Machine Learning (ML) techniques in traditional settings to improve efficiency and bring automation to these networks. Applying ML to existing networks raises many challenges and use cases. In this context, we investigate different ML techniques to estimate the resource requirements of complex network entities such as Virtual Network Functions (VNFs) deployed in a Software Defined Network (SDN) environment. In particular, we focus on the resource requirements of the VNFs in terms of Central Processing Unit (CPU) consumption when the input traffic, represented by features, is processed by them. We propose supervised ML models, Multiple Linear Regression (MLR) and Support Vector Regression (SVR), which are compared and analyzed against the state-of-the-art Fitting Neural Network (FNN), to address the resource requirement problem for VNFs. Our experiments demonstrated that the behavior of different VNFs can be learned in order to model their resource requirements. Finally, these models are compared and analyzed in terms of regression accuracy and the Cumulative Distribution Function (CDF) of the percentage prediction error. In all the investigated cases, the ML models achieved good prediction accuracy, with a total error of less than 10% for FNN, while the total error was less than 9% and 4% for MLR and SVR, respectively, which demonstrates the effectiveness of ML in solving such problems. Furthermore, the results show that SVR outperforms MLR and FNN in almost all the considered scenarios, while MLR is marginally more accurate than FNN.
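The sketch below (not the authors' code) illustrates the kind of pipeline the abstract describes: fitting MLR and SVR on traffic features to predict VNF CPU consumption and then examining the empirical CDF of the percentage prediction error. The synthetic data, feature choices, and hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch, assuming scikit-learn and synthetic data; the features,
# target relationship, and hyperparameters are illustrative assumptions,
# not the paper's actual dataset or settings.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical traffic features: packet rate, mean packet size, flow count.
n = 2000
X = rng.uniform([1e3, 64, 10], [1e5, 1500, 500], size=(n, 3))
# Hypothetical VNF CPU consumption (%) as a noisy function of the features.
y = 0.0004 * X[:, 0] + 0.01 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MLR": LinearRegression(),
    "SVR": make_pipeline(StandardScaler(),
                         SVR(kernel="rbf", C=100, epsilon=0.5)),  # assumed hyperparameters
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Percentage prediction error per test sample.
    pct_err = 100.0 * np.abs(pred - y_te) / np.abs(y_te)
    # Empirical CDF: fraction of samples with error below each sorted value.
    err_sorted = np.sort(pct_err)
    cdf = np.arange(1, len(err_sorted) + 1) / len(err_sorted)
    p90 = err_sorted[np.searchsorted(cdf, 0.9)]
    print(f"{name}: mean error {pct_err.mean():.2f}%, 90th-percentile error {p90:.2f}%")
```

The same CDF computation can be plotted to compare how quickly each model's error distribution saturates, which is how the paper reports the relative accuracy of MLR, SVR, and FNN.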

Keywords