IEEE Access (Jan 2024)
Dynamic Sizing of Cloud-Native Telco Data Centers With Digital Twin and Reinforcement Learning
Abstract
Telco edge data centers (DCs) host applications whose load fluctuates considerably over the course of a day. This variability mandates swift, responsive resource adjustments to avoid unnecessary costs during off-peak periods, when a significant fraction of nodes may be under-utilized. Tackling this challenge is integral to optimizing the operational efficiency and cost-effectiveness of telco edge DCs. To this end, this article addresses the Dynamic Data Center Sizing (DDS) problem, which amounts to optimizing the number of active nodes according to current resource demand. The proposed DDS solution consists of two core modules: a forecasting module, which predicts resource demands, and a decision-making module, which acts upon the predicted demands. The decision-making of DDS is implemented via the filtering and rank-drain-observe (RDO) algorithms. Filtering is based on integer linear programming and computes the theoretically optimal state of the DC from the predicted resource demands. RDO is a heuristic that strives to realize the optimal DC state in an iterative and robust fashion. To expedite RDO in large-scale clusters, we further devise a Reinforcement Learning (RL)-enabled DDS variant (RL-DDS), in which RDO integrates an RL agent that computes optimized batches of nodes for concurrent deactivation. We propose an innovative solution based on the notion of a Digital Twin to train the RL agent in emulation mode. DDS and RL-DDS are evaluated on real-life testbeds resembling actual DCs. Results demonstrate significant cost reductions, ranging from 7% in conservative, relatively static scenarios up to 38% in highly dynamic settings.
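The filtering step described above selects the smallest set of active nodes whose combined capacity covers the predicted demand. As a loose illustration only (not the paper's ILP formulation, which is not given in the abstract), the following sketch solves that covering problem by brute force over node subsets; the node capacities and demand value are hypothetical:

```python
from itertools import combinations

def min_active_nodes(capacities, demand):
    """Return indices of the smallest set of nodes whose total
    capacity covers `demand`, or None if infeasible.

    Brute-force stand-in for an ILP solve: try subsets in order
    of increasing size and accept the first feasible one.
    """
    n = len(capacities)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if sum(capacities[i] for i in subset) >= demand:
                return list(subset)
    return None  # even all nodes together cannot cover the demand

# Hypothetical cluster: four nodes, predicted demand of 10 units.
active = min_active_nodes([4, 4, 8, 8], 10)
print(active)  # → [0, 2]: two nodes suffice; the other two can be drained
```

A real DC-sizing formulation would add per-resource dimensions (CPU, memory), anti-affinity constraints, and draining costs, and would be handed to an ILP solver rather than enumerated; the sketch only conveys the objective of minimizing active nodes subject to covering predicted demand.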
Keywords