IEEE Open Journal of Vehicular Technology (Jan 2024)

Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks

  • Olivia Nakayima,
  • Mostafa I. Soliman,
  • Kazunori Ueda,
  • Samir A. Elsagheer Mohamed

DOI: https://doi.org/10.1109/OJVT.2024.3396637
Journal volume & issue: Vol. 5, pp. 721–736

Abstract


Ensuring reliable data transmission in all Vehicular Ad-hoc Network (VANET) segments is paramount in modern vehicular communications. Vehicular operations face unpredictable network conditions that affect the adaptiveness of routing protocols. Several solutions have addressed these challenges, but each has noted shortcomings. This work proposes a centralised-controller multi-agent (CCMA) algorithm based on Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) principles to enhance VANET performance using Reinforcement Learning (RL). The algorithm is trained and validated in a simulation environment modelling the network nodes, routing protocols and buffer schedules. It optimally deploys DTN routing protocols (Spray and Wait, Epidemic, and PRoPHETv2) and buffer schedules (Random, Defer, Earliest Deadline First, First In First Out, Largest/Smallest Bundle First) based on network state information (i.e., traffic pattern, buffer size variance, node and link uptime, bundle Time To Live (TTL), link loss and capacity). These are implemented in three environment types: Advanced Technological Regions, Limited Resource Regions and Opportunistic Communication Regions. The study assesses the performance of the multi-protocol approach using TTL, buffer management, link quality, delivery ratio, latency and overhead scores as metrics for optimal network performance. A comparative analysis with single-protocol VANETs (simulated using the Opportunistic Network Environment (ONE) simulator) demonstrates improved performance of the proposed algorithm in all VANET scenarios.
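The abstract describes a centralised controller that selects a DTN routing protocol and a buffer schedule from observed network-state features. The paper's implementation is not reproduced here; the sketch below is a minimal, hypothetical Python illustration of that state/action formulation with a simple epsilon-greedy tabular policy. All names (e.g. NetworkState, CentralController, the discretised state fields) are assumptions for illustration, not the authors' code, and the reward would in practice be aggregated from the delivery-ratio, latency and overhead scores mentioned above.

    # Hypothetical sketch: joint selection of routing protocol and buffer
    # schedule from a discretised network state, as described in the abstract.
    import random
    from dataclasses import dataclass
    from itertools import product

    ROUTING_PROTOCOLS = ["SprayAndWait", "Epidemic", "PRoPHETv2"]
    BUFFER_SCHEDULES = ["Random", "Defer", "EDF", "FIFO",
                        "LargestBundleFirst", "SmallestBundleFirst"]
    ACTIONS = list(product(ROUTING_PROTOCOLS, BUFFER_SCHEDULES))  # joint action space

    @dataclass(frozen=True)
    class NetworkState:
        traffic_pattern: str   # e.g. "dense" or "sparse"
        buffer_variance: int   # discretised buffer-size variance level
        link_uptime: int       # discretised node/link uptime level
        bundle_ttl: int        # discretised remaining bundle TTL
        link_loss: int         # discretised link loss/capacity level

    class CentralController:
        """Epsilon-greedy tabular policy over (state, protocol+schedule) pairs."""
        def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
            self.q = {}                      # Q-values keyed by (state, action)
            self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

        def choose_action(self, state: NetworkState):
            # Explore occasionally; otherwise pick the best-known joint action.
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, state, action, reward, next_state):
            # Standard one-step Q-learning update on the joint action.
            best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

In a full system the controller would sit in the SDN control plane, receive these state features from the simulated nodes each decision epoch, and push the chosen protocol/schedule configuration back to the region being managed.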

Keywords