IEEE Access (Jan 2024)

Enhanced SCNN-Based Hybrid Spatial-Temporal Lane Detection Model for Intelligent Transportation Systems

  • Jingang Li,
  • Chenxu Ma,
  • Yonghua Han,
  • Haibo Mu,
  • Lurong Jiang

DOI
https://doi.org/10.1109/ACCESS.2024.3373203
Journal volume & issue
Vol. 12
pp. 40075–40091

Abstract


Accurate and timely lane detection is imperative for the seamless operation of autonomous driving systems. In this study, leveraging the gradual variation of lane features within a defined range of width and length, we introduce an enhanced spatial-temporal SCNN framework. This framework serves as the cornerstone of a hybrid spatial-temporal lane detection model tailored to the prevalent issues of substandard detection performance and insufficient real-time processing in intricate scenarios, such as lane erosion and inconsistent lighting conditions, which often challenge conventional models. Building on the observation that lanes manifest as continuous lines, we employ a temporal sequence of lane imagery as the model input, thereby ensuring a rich provision of feature information. The model adopts an encoder-decoder structure and integrates a spatial-temporal recurrent module to extract interrelated information from the image sequence, outputting the lane detection result for the final frame. The proposed model exhibits a commendable synthesis of accuracy and real-time efficiency, attaining an Accuracy of 97.87%, an $F_{1}$-score of 0.943, and an FPS of 19.342 on the tvtLANE dataset, and an Accuracy of 98.21% with an $F_{1}$-score of 0.957 on the TuSimple dataset. These metrics signify superior performance over a majority of current lane detection methods.
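The data flow the abstract describes (encode each frame, aggregate features recurrently across the sequence, decode a mask for the final frame) can be sketched at the shape level. The snippet below is a minimal illustrative stand-in, not the authors' implementation: `encode`/`decode` are toy downsampling/upsampling operations in place of convolutional layers, and a simple `tanh` recurrence stands in for the paper's spatial-temporal SCNN module.

```python
import numpy as np

def encode(frame, factor=4):
    # Toy "encoder": downsample by block-averaging (stand-in for conv layers).
    h, w = frame.shape
    return frame[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(feat, factor=4):
    # Toy "decoder": nearest-neighbour upsampling back to input resolution.
    return np.repeat(np.repeat(feat, factor, axis=0), factor, axis=1)

def detect_lanes(frames, factor=4):
    """Process a temporal sequence of frames; return a mask for the last frame.

    The recurrent update h_t = tanh(x_t + h_{t-1}) is a placeholder for the
    paper's spatial-temporal recurrent (SCNN-based) module.
    """
    h = None
    for frame in frames:
        x = encode(frame, factor)
        h = np.tanh(x) if h is None else np.tanh(x + h)
    return decode(h, factor)

# Five consecutive 128x256 grayscale frames -> one 128x256 lane mask.
frames = [np.random.rand(128, 256) for _ in range(5)]
mask = detect_lanes(frames)
print(mask.shape)  # (128, 256)
```

The key property preserved from the paper's design is that the whole sequence contributes features, but only one mask, aligned with the terminal frame, is produced.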

Keywords