CAAI Transactions on Intelligence Technology (Mar 2019)

Lane detection algorithm based on temporal–spatial information matching and fusion

  • Jun Wang,
  • Bin Kong,
  • Tao Mei,
  • Hu Wei

DOI: https://doi.org/10.1049/trit.2017.0022

Abstract


To address the limited information available in a single image and its poor resistance to interference, a lane detection algorithm based on temporal-spatial information matching and fusion via reverse perspective transformation is proposed in this study. In this algorithm, the images captured by a camera are first segmented using binary features extracted from the effective ranges. Next, the feature images are transformed by reverse perspective mapping into the top-view space and combined with the inertial navigation system (INS) information of the unmanned vehicle. Continuous multi-frame feature image data then undergo temporal-spatial information matching and fusion in the top-view space, yielding lane feature images with abundant data over a wider range. Cluster analysis is conducted on the matched feature image information to group the feature data of each individual lane and to filter out interference. After clustering, the lane feature data are fitted to a parametric curve using a least-squares curve-fitting algorithm. Finally, a prediction model is established between two adjacent frames to achieve lane tracking prediction and to further improve detection accuracy. Experimental results show that the algorithm offers strong interference resistance, high detection efficiency, and good stability and timeliness.
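To make the top-view projection and curve-fitting steps described above concrete, the following is a minimal sketch (not the authors' implementation) in Python: feature points are mapped from the image plane to the top-view space by a homography, and the clustered lane points are then fitted to a parametric curve by least squares. The homography matrix H and the quadratic lane model x = a*y^2 + b*y + c are illustrative assumptions, not values from the paper.

```python
import numpy as np

def inverse_perspective(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map image-plane feature points (u, v) into top-view coordinates (x, y)
    using a 3x3 homography H (a stand-in for the reverse perspective transform)."""
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous coords
    xyw = uv1 @ H.T                                             # apply homography
    return xyw[:, :2] / xyw[:, 2:3]                             # dehomogenize

def fit_lane_curve(points_xy: np.ndarray, degree: int = 2) -> np.ndarray:
    """Least-squares fit of a single clustered lane, modelled as x = f(y)
    in the top-view space (quadratic by default, an assumed model)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    return np.polyfit(y, x, degree)  # polynomial coefficients, highest order first

if __name__ == "__main__":
    H = np.eye(3)  # placeholder homography; a real one comes from camera calibration
    pts_uv = np.array([[320.0, 400.0], [318.0, 350.0], [314.0, 300.0], [308.0, 250.0]])
    pts_xy = inverse_perspective(pts_uv, H)
    coeffs = fit_lane_curve(pts_xy)
    print("lane curve coefficients:", coeffs)
```

In a multi-frame setting, the fused feature points from several consecutive frames would be accumulated in the top-view space (aligned using the INS pose) before clustering and fitting; the sketch above only covers the per-cluster projection and fitting.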

Keywords