IEEE Access (Jan 2019)

Reinforcement Learning for Service Function Chain Reconfiguration in NFV-SDN Metro-Core Optical Networks

  • Sebastian Troia,
  • Rodolfo Alvizu,
  • Guido Maier

DOI
https://doi.org/10.1109/ACCESS.2019.2953498
Journal volume & issue
Vol. 7
pp. 167944–167957

Abstract

With the advent of 5G technology, we are witnessing the development of increasingly bandwidth-hungry network applications, such as enhanced mobile broadband, massive machine-type communications and ultra-reliable low-latency communications. Software Defined Networking (SDN), Network Function Virtualization (NFV) and Network Slicing (NS) are gaining momentum not only in research but also in the IT industry, as they represent the drivers of 5G. NS is an approach to network operations that partitions a physical topology into multiple independent virtual networks, called network slices (or slices). Within a single slice, a set of Service Function Chains (SFCs) is defined, and network resources, e.g. bandwidth, can be provisioned dynamically on demand according to specific Quality of Service (QoS) and Service Level Agreement (SLA) requirements. Traditional schemes for network resource provisioning based on static policies may lead to poor resource utilization and suffer from scalability issues. In this article, we investigate the application of Reinforcement Learning (RL) to dynamic SFC resource allocation in NFV-SDN enabled metro-core optical networks. RL makes it possible to build a self-learning system that solves highly complex problems by employing agents that learn policies from an evolving network environment. In particular, we build an RL system that optimizes the resource allocation of SFCs in a multi-layer network (packet over flexi-grid optical layer). The RL agent decides if and when to reconfigure the SFCs, given the state of the network and historical traffic traces. Numerical simulations show significant advantages of our RL-based optimization over a rule-based optimization design.
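To illustrate the kind of decision process the abstract describes — an agent choosing if and when to reconfigure an SFC given the network state — the following is a minimal tabular Q-learning sketch. It is not the authors' implementation: the toy environment, the discretized load states, the reward shape (a reconfiguration-disruption penalty versus a congestion/SLA penalty), and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch: a tabular Q-learning agent deciding whether to keep
# or reconfigure an SFC's resource allocation, based on a discretized
# traffic-load state. Environment dynamics and penalties are assumptions.
import random

N_STATES = 5           # discretized load levels (0 = idle .. 4 = congested)
ACTIONS = (0, 1)       # 0 = keep current allocation, 1 = reconfigure SFC
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
RECONF_PENALTY = 1.0   # service-disruption cost of a reconfiguration
CONGEST_PENALTY = 3.0  # SLA-violation cost of staying congested

def step(state, action, rng):
    """Toy dynamics: reconfiguring relieves load; keeping lets it drift up."""
    if action == 1:
        next_state = max(0, state - 2)   # reconfiguration frees resources
        reward = -RECONF_PENALTY
    else:
        next_state = min(N_STATES - 1, state + rng.choice((0, 1)))
        reward = -CONGEST_PENALTY if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def train(episodes=2000, horizon=20, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r = step(s, a, rng)
            # standard Q-learning update
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

Under these assumed penalties, the learned policy keeps the allocation at low load (where reconfiguration only costs disruption) and reconfigures near congestion (where the SLA penalty dominates), which is the trade-off the paper's agent learns from real traffic traces at much larger scale.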

Keywords