IEEE Access (Jan 2024)
Optimizing SDN Controller Load Balancing Using Online Reinforcement Learning
Abstract
In distributed software-defined networking (SDN), control plane functions are partitioned across multiple controller instances to enhance fault tolerance and scalability. However, the dynamic nature of network traffic and rapid network events, such as link failures and controller node failures, can lead to an uneven workload distribution among controller nodes. This research aims to dynamically adjust the switch-to-controller mapping to address load imbalance. We model flow arrivals at switches and the subsequent actions within a Markov decision process (MDP) framework. Solving the MDP exactly requires precise knowledge of the flow arrival rates; however, such an assumption is impractical in dynamic environments. Reinforcement learning (RL) instead learns policies from interactions with the environment, enabling autonomous decision-making in complex domains under uncertainty. The proposed scheme uses RL to monitor SDN flow dynamics and maintain system load balance through switch migration. Specifically, the scheme generates migration triplets specifying the source controller, the destination controller, and the switch to be migrated. The migration cost is expressed in terms of the flow arrival rate at the switch and the hop count between the switch and the controllers. Experimental results confirm that the framework effectively achieves load balancing across different network topologies and diverse traffic load distributions on switches.
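For illustration, the following is a minimal sketch of how such a migration triplet (source controller, destination controller, switch) could be selected, assuming a hypothetical cost model in which migration cost is the switch's flow arrival rate weighted by its hop count to the candidate destination controller; the function and field names are illustrative and do not reflect the paper's implementation.

    # Sketch only: illustrative triplet selection, not the authors' code.
    from dataclasses import dataclass

    @dataclass
    class Switch:
        name: str
        arrival_rate: float          # flows per second arriving at the switch
        hops: dict                   # controller name -> hop count to that controller

    def controller_load(controllers):
        """Total flow arrival rate currently handled by each controller."""
        return {c: sum(s.arrival_rate for s in sw) for c, sw in controllers.items()}

    def select_migration_triplet(controllers):
        """Pick (source, destination, switch) that moves load from the most
        loaded controller to the least loaded one at minimum migration cost."""
        load = controller_load(controllers)
        src = max(load, key=load.get)    # overloaded controller
        dst = min(load, key=load.get)    # underloaded controller
        if src == dst or not controllers[src]:
            return None
        # Hypothetical cost model: arrival rate weighted by hop count to dst.
        switch = min(controllers[src], key=lambda s: s.arrival_rate * s.hops[dst])
        return src, dst, switch.name

    # Example usage with two controllers and three switches.
    controllers = {
        "c1": [Switch("s1", 120.0, {"c1": 1, "c2": 3}),
               Switch("s2", 40.0,  {"c1": 1, "c2": 2})],
        "c2": [Switch("s3", 30.0,  {"c1": 2, "c2": 1})],
    }
    print(select_migration_triplet(controllers))   # -> ('c1', 'c2', 's2')

In the proposed scheme, the choice among such candidate migrations is not made by a fixed rule as above but is learned online by the RL agent from observed flow dynamics.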
Keywords