IEEE Access (Jan 2023)

Vision-Based Robust Lane Detection and Tracking in Challenging Conditions

  • Samia Sultana,
  • Boshir Ahmed,
  • Manoranjan Paul,
  • Muhammad Rafiqul Islam,
  • Shamim Ahmad

DOI
https://doi.org/10.1109/ACCESS.2023.3292128
Journal volume & issue
Vol. 11
pp. 67938 – 67955

Abstract


Lane marking detection is fundamental to both advanced driver assistance systems and traffic surveillance systems. However, detecting lanes is highly challenging when the visibility of a road lane marking is low, obscured, or entirely lost due to challenging real-life environments and adverse weather. Most lane detection methods suffer from four types of challenges: (i) light effects, i.e., shadow, glare, and reflection created by different light sources such as streetlamps, tunnel lights, the sun, and wet roads; (ii) obscured visibility of eroded, blurred, dashed, colored, and cracked lanes caused by natural disasters and adverse weather (rain, snow, etc.); (iii) occlusion of lane markings by surrounding objects (wipers, vehicles, etc.); and (iv) the presence of confusing lane-like lines inside the lane view, e.g., guardrails, pavement markings, and road dividers. In this paper, we propose a simple, real-time, and robust lane detection and tracking method that detects lane markings under the abovementioned challenging conditions. The method introduces three key technologies. First, we introduce a comprehensive intensity threshold range (CITR) to improve the performance of the Canny operator in detecting different types of lane edges, e.g., clear, low-intensity, cracked, colored, eroded, or blurred lane edges. Second, we propose a two-step lane verification technique, the angle-based geometric constraint (AGC) and the length-based geometric constraint (LGC), applied after the Hough Transform, to verify the characteristics of lane markings and prevent incorrect lane detection. Finally, we propose a novel lane tracking technique that predicts the lane position in the next frame by defining a range of horizontal lane position (RHLP) along the x-axis, updated with respect to the lane position in the previous frame. It keeps track of the lane position when the left or right lane marking, or both, is partially or fully invisible.
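The abstract describes the verification and tracking steps only at a high level. A minimal Python sketch of what a two-step geometric verification (AGC/LGC) on Hough Transform line segments, followed by an RHLP-style tracking window, might look like is given below; all thresholds, function names, and sample segments here are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical thresholds -- the paper's actual AGC/LGC/RHLP ranges are
# not given in the abstract.
ANGLE_RANGE = (20.0, 80.0)   # degrees: plausible slope range for a lane marking
MIN_LENGTH = 40.0            # pixels: reject short, spurious Hough segments
RHLP_MARGIN = 30             # pixels: half-width of the horizontal search window

def passes_agc(x1, y1, x2, y2):
    """Angle-based geometric constraint: keep segments whose absolute
    angle from the horizontal lies in a lane-like range."""
    angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
    return ANGLE_RANGE[0] <= angle <= ANGLE_RANGE[1]

def passes_lgc(x1, y1, x2, y2):
    """Length-based geometric constraint: keep sufficiently long segments."""
    return math.hypot(x2 - x1, y2 - y1) >= MIN_LENGTH

def verify_lanes(segments):
    """Two-step verification applied to Hough Transform output:
    each segment must satisfy both AGC and LGC."""
    return [s for s in segments if passes_agc(*s) and passes_lgc(*s)]

def rhlp(prev_x):
    """Range of horizontal lane position for the next frame, centred on
    the lane's x position detected in the previous frame."""
    return (prev_x - RHLP_MARGIN, prev_x + RHLP_MARGIN)

segments = [(100, 400, 160, 300),   # lane-like: steep and long
            (0, 200, 300, 205),     # near-horizontal: guardrail/pavement clutter
            (100, 400, 105, 395)]   # too short: likely noise
lanes = verify_lanes(segments)
print(len(lanes))          # only the first segment survives -> 1
lo, hi = rhlp(prev_x=120)
print(lo, hi)              # search window for the next frame -> 90 150
```

The near-horizontal segment is rejected by the angle constraint (mimicking the suppression of guardrails and pavement markings), while the short segment is rejected by the length constraint; the surviving lane's x position then seeds the horizontal search range for the next frame.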
To evaluate the performance of the proposed method, we used the DSDLDE (Lee and Moon, 2018) and SLD (Borkar et al., 2009) datasets, with $1080\times 1920$ and $480\times 720$ resolutions at 24 and 25 frames/sec, respectively, whose video frames contain various challenging scenarios. Experimental results show an average detection rate of 97.55% and an average processing time of 22.33 msec/frame, outperforming state-of-the-art methods.
