IEEE Access (Jan 2024)
StrikeNet: Deep Convolutional LSTM-Based Road Lane Reconstruction With Spatiotemporal Inference for Lane Keeping Control
Abstract
This paper presents a Spatio-Temporal Road Inference for a KEeping NETwork (StrikeNet), aimed at enhancing Road Lane Reconstruction (RLR) and lateral motion control in Autonomous Vehicles (AVs) using deep neural networks. Accurate road lane model coefficients are essential for an effective Lane Keeping System (LKS), but traditional vision systems often fail when lane markers are absent or too faint to be recognized. To overcome this, a driving dataset was restructured by combining road information from a vision system with forward camera images for spatial training of the RLR. The sequential spatial learning outputs were then fused with in-vehicle sensor data for temporal inference via Long Short-Term Memory (LSTM). StrikeNet was rigorously tested in both typical and uncertain driving environments, and comprehensive statistical and visualization analyses were conducted to evaluate various RLR methods and lateral motion control strategies. Remarkably, the RLR derived reliable road coefficients even in the absence of lane markers. Compared with four alternative techniques, the proposed method yielded the lowest error and variance between human steering inputs and the control input. Specifically, under high and low lane quality conditions, it reduced the control input error by up to 72% and 66%, and the variance by 54% and 94%, respectively. These findings highlight StrikeNet's effectiveness in bolstering the fail-operational performance and reliability of lane-keeping and lane departure warning systems in autonomous driving, thereby enhancing control continuity and mitigating traffic accidents caused by path deviation.
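To make the spatial-then-temporal pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: a CNN encodes each forward camera frame (the spatial stage), the per-frame features are fused with in-vehicle sensor readings, and an LSTM infers road lane model coefficients over the sequence (the temporal stage). The framework (PyTorch), layer sizes, sensor channels, and the four-coefficient lane polynomial output are all illustrative assumptions.

```python
# Hedged sketch of the CNN + LSTM fusion the abstract describes.
# All architecture details here are assumptions, not the paper's design.
import torch
import torch.nn as nn


class LaneCoeffNet(nn.Module):
    def __init__(self, sensor_dim=4, hidden_dim=128, num_coeffs=4):
        super().__init__()
        # Spatial stage: small CNN encoder applied to each forward frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> 32 * 4 * 4 = 512 features
        )
        # Temporal stage: LSTM over fused image features + sensor data.
        self.lstm = nn.LSTM(512 + sensor_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_coeffs)

    def forward(self, frames, sensors):
        # frames:  (batch, time, 3, H, W) forward-camera sequence
        # sensors: (batch, time, sensor_dim), e.g. speed, yaw rate, steering
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([feats, sensors], dim=-1)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])           # lane coefficients at last step


if __name__ == "__main__":
    model = LaneCoeffNet()
    frames = torch.randn(2, 8, 3, 64, 128)     # two 8-frame clips
    sensors = torch.randn(2, 8, 4)
    print(model(frames, sensors).shape)         # torch.Size([2, 4])
```

The key design point mirrored from the abstract is the fusion step: sensor data enters at the LSTM input rather than the CNN, so temporal inference can compensate when image evidence (lane markers) is weak or missing.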
Keywords