IEEE Access (Jan 2023)

Deep Multi-Agent Reinforcement Learning With Minimal Cross-Agent Communication for SFC Partitioning

  • Angelos Pentelas,
  • Danny De Vleeschauwer,
  • Chia-Yu Chang,
  • Koen De Schepper,
  • Panagiotis Papadimitriou

DOI: https://doi.org/10.1109/ACCESS.2023.3269576
Journal volume & issue: Vol. 11, pp. 40384–40398

Abstract

Network Function Virtualization (NFV) decouples network functions from the underlying specialized devices, enabling network processing with higher flexibility and resource efficiency. This promotes the use of virtual network functions (VNFs), which can be grouped to form a service function chain (SFC). A critical challenge in NFV is SFC partitioning (SFCP), which is mathematically expressed as a graph-to-graph mapping problem. Given its NP-hardness, SFCP is commonly solved by approximation methods. Yet, the relevant literature exhibits a gradual shift towards data-driven SFCP frameworks, such as (deep) reinforcement learning (RL). In this article, we initially identify crucial limitations of existing RL-based SFCP approaches. In particular, we argue that most of them stem from the centralized implementation of RL schemes. Therefore, we devise a cooperative deep multi-agent reinforcement learning (DMARL) scheme for decentralized SFCP, which fosters efficient communication between neighboring agents. Our simulation results (i) demonstrate that DMARL outperforms a state-of-the-art centralized double deep Q-learning algorithm, (ii) unfold the fundamental behaviors learned by the team of agents, (iii) highlight the importance of information exchange between agents, and (iv) showcase the impact of various network topologies on DMARL efficiency.
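To make the decentralized setting concrete, the sketch below shows a deliberately simplified instance of the idea the abstract describes: each agent controls one network node, decides whether to host the next VNF of a chain, and augments its local observation with a single message from its neighbor (the neighbor's residual capacity). This is not the paper's algorithm; the paper uses deep RL, whereas this sketch uses tabular Q-learning with independent learners, and all constants (chain length, capacities, reward values) are illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(0)

N_VNFS = 3    # length of the service function chain (illustrative)
CAPACITY = 2  # resource units at each agent's node (illustrative)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# One Q-table per agent; actions: 0 = defer to the neighbor, 1 = host the VNF
Q = [defaultdict(lambda: [0.0, 0.0]) for _ in range(2)]

def run_episode(learn=True):
    caps = [CAPACITY, CAPACITY]
    placed, total_reward = 0, 0.0
    for step in range(20):                  # safety bound on episode length
        if placed == N_VNFS:
            break
        a = step % 2                        # agents take turns
        # Local observation augmented with the neighbor's message
        # (its residual capacity) -- the minimal cross-agent communication.
        state = (placed, caps[a], caps[1 - a])
        if learn and random.random() < EPS:
            action = random.randrange(2)    # epsilon-greedy exploration
        else:
            action = 1 if Q[a][state][1] > Q[a][state][0] else 0
        if action == 1 and caps[a] > 0:     # host the next VNF locally
            caps[a] -= 1
            placed += 1
            reward = 1.0
        elif action == 1:                   # tried to host with no capacity
            reward = -1.0
        else:                               # defer to the neighbor
            reward = 0.0
        total_reward += reward
        if learn:                           # independent Q-learning update
            nxt = (placed, caps[a], caps[1 - a])
            Q[a][state][action] += ALPHA * (
                reward + GAMMA * max(Q[a][nxt]) - Q[a][state][action])
    return placed, total_reward

for _ in range(2000):
    run_episode()

placed, reward = run_episode(learn=False)
print(placed, reward)  # a fully partitioned chain gives placed == N_VNFS
```

Because neither agent alone can host the whole chain (capacity 2 versus 3 VNFs), the agents must learn to split it, and the neighbor's residual capacity in the state is what lets each agent judge when deferring is safe. Bootstrapping on the state observed immediately after one's own action, before the neighbor moves, is the usual independent-learner simplification, not the cooperative scheme of the paper.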

Keywords