IEEE Access (Jan 2021)

Critical Offset Optimizations for Overlapping-Based Time-Triggered Windows in Time-Sensitive Network

  • Khaled M. Shalghum,
  • Nor K. Noordin,
  • Aduwati Sali,
  • Fazirulhisyam Hashim

DOI: https://doi.org/10.1109/ACCESS.2021.3110585
Journal volume & issue: Vol. 9, pp. 130484 – 130501

Abstract

Deterministic, low-latency communication is increasingly an essential requirement for safety-critical applications such as the automotive and industrial-automation domains. Time-sensitive networking (TSN) is a set of new IEEE 802.1 standards that introduces Ethernet-based amendments to support these applications. One of these enhancements, IEEE 802.1Qbv, defines the time-aware shaping (TAS) technique for time-triggered (TT) traffic scheduling. TAS is a window-based scheduling mechanism that uses a gating system controlled by the gate control list (GCL) schedules in all nodes. Although several scheduling algorithms have been proposed to investigate the effects of window-related parameters on network performance, the offset difference ( $OD$ ) between same-class windows in adjoining nodes has not yet been optimized. This optimization is crucial for implementing less pessimistic latency schedules. This paper proposes an optimized flexible window-overlapping scheduling (OFWOS) algorithm that optimizes TT window offsets based on latency evaluations, considering the overlapping between different-priority windows at the same node. First, we formulate the GCL timings as mathematical expressions under variable $OD\text{s}$ between same-priority windows. Then, an analytical model is built using the network calculus (NC) approach to express the worst-case end-to-end delay ( $WCD$ ) for TT flows and is evaluated on a realistic vehicular use case. The OFWOS model optimizes $OD$ under all overlapping situations between TT windows at the same node. Compared with recent works on 3-hop and 30-hop TSN connections, OFWOS reduces the $WCD$ bounds by 8.4% and 32.6%, respectively, achieving less pessimistic end-to-end latency bounds.
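The window and offset-difference notions in the abstract can be sketched in a few lines of Python. This is an illustrative toy only, not the paper's OFWOS model or its NC analysis: the `Window` class, `offset_difference`, `windows_overlap`, and the 3-hop example values are all assumptions introduced here to show what an $OD$ between same-priority windows on adjoining nodes looks like.

```python
from dataclasses import dataclass

@dataclass
class Window:
    # One time-triggered (TT) transmission window in a node's gate control list (GCL).
    offset: float   # window start within the GCL cycle (e.g., microseconds)
    length: float   # window duration

def offset_difference(upstream: Window, downstream: Window, cycle: float) -> float:
    """Offset difference (OD) between same-priority windows on adjoining nodes,
    taken modulo the GCL cycle so it is always non-negative."""
    return (downstream.offset - upstream.offset) % cycle

def windows_overlap(a: Window, b: Window) -> bool:
    """True if two windows scheduled at the same node overlap in time."""
    return a.offset < b.offset + b.length and b.offset < a.offset + a.length

# Toy 3-hop path: each per-hop OD contributes to the end-to-end latency of a
# frame that must wait for the next same-priority window at every node.
cycle = 100.0
path = [Window(0.0, 10.0), Window(20.0, 10.0), Window(55.0, 10.0)]
ods = [offset_difference(path[i], path[i + 1], cycle) for i in range(len(path) - 1)]
print(ods)  # per-hop offset differences: [20.0, 35.0]
```

In this toy view, shrinking each $OD$ toward the per-hop processing and transmission time tightens the end-to-end bound, which is the intuition behind optimizing the offsets rather than leaving them fixed.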

Keywords