Measurement: Sensors (Feb 2024)
An efficient frame preemption algorithm for time-sensitive networks using enhanced graph convolutional network with particle swarm optimization
Abstract
The Internet of Things (IoT) is enabling numerous time-sensitive applications that require real-time transmission for their communication services. The 5G Ultra-Reliable and Low-Latency Communication (URLLC) scenario has made such latency-critical applications possible, and Time-Sensitive Networking (TSN) has attracted considerable attention as a promising paradigm for meeting the deterministic transmission guarantees of 5G. TSN aims to provide deterministic, time-critical communication over Ethernet networks, enabling reliable, low-latency, and synchronized communication between devices so that time-sensitive data and control messages are delivered with high precision. However, TSN carries hybrid traffic that mixes best-effort and time-sensitive flows, so efficient routing and frame preemption are needed to achieve predictable, bounded latency. Jointly optimizing time-sensitive and non-time-sensitive traffic significantly enlarges the problem space and makes a solution harder to establish. This research presents an Enhanced Graph Convolutional Network-based Deep Reinforcement Learning (EGCN-based DRL) solution to this joint optimization problem in realistic communication settings. The EGCN is embedded in the DRL agent to capture the network's spatial dependencies and improve the generalization of the proposed technique. Specifically, the EGCN approximates the graph convolution kernel with a first-order Chebyshev polynomial, which lowers the algorithm's complexity and improves its practicality. Particle Swarm Optimization (PSO) is further employed to accelerate the convergence of model training. Numerical simulations show that the proposed EGCN-based DRL algorithm outperforms state-of-the-art techniques in average end-to-end latency and converges quickly.
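For reference, the first-order Chebyshev truncation of the spectral graph convolution mentioned in the abstract is commonly written as follows. This is the standard form from the GCN literature (e.g. Kipf and Welling, 2017); the paper's exact EGCN kernel may differ:

```latex
% First-order Chebyshev approximation of the spectral graph convolution
% (standard GCN form; the paper's exact EGCN kernel may differ).
% Truncating the Chebyshev expansion at K = 1 and tying the two remaining
% parameters gives the single-parameter filter
g_\theta \star x \;\approx\; \theta \bigl( I_N + D^{-1/2} A D^{-1/2} \bigr) x ,
% and with the renormalization \tilde{A} = A + I_N,
% \tilde{D}_{ii} = \sum_j \tilde{A}_{ij}, the layer-wise propagation rule is
H^{(l+1)} = \sigma\!\bigl( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \bigr) .
```

Because the filter acts only through the (sparse) normalized adjacency matrix, each layer costs time linear in the number of edges, which is the complexity reduction the abstract alludes to.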
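The PSO component likewise follows the standard velocity/position update. Below is a minimal sketch, assuming each particle encodes a candidate parameter vector and a hypothetical fitness function stands in for the DRL training objective (the paper does not specify its PSO variant or how it couples to training):

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Standard PSO (illustrative sketch, not the paper's implementation).
    `fitness` is a hypothetical stand-in for the DRL training objective,
    e.g. negative average end-to-end latency, to be maximized."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()        # swarm-wide best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

# Toy usage: maximize a trivial surrogate objective.
best, score = pso(lambda p: -np.sum(p**2), dim=8)
```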