Geomatics (Jun 2021)

Transfer Learning for LiDAR-Based Lane Marking Detection and Intensity Profile Generation

  • Ankit Patel,
  • Yi-Ting Cheng,
  • Radhika Ravi,
  • Yi-Chun Lin,
  • Darcy Bullock,
  • Ayman Habib

DOI: https://doi.org/10.3390/geomatics1020016
Journal volume & issue: Vol. 1, No. 2, pp. 287–309

Abstract

Recently, light detection and ranging (LiDAR)-based mobile mapping systems (MMS) have been utilized for extracting lane markings using deep learning frameworks. However, large datasets are required for training neural networks. Furthermore, once accurate lane markings are detected from LiDAR data, an algorithm that automatically reports their intensity information is beneficial for identifying worn-out or missing lane markings. In this paper, a transfer learning approach based on fine-tuning of a pretrained U-net model for lane marking extraction and a strategy for generating intensity profiles from the extracted results are presented. Starting from a pretrained model, a new model can be trained better and faster to make predictions on a target domain dataset with only a few training examples. An original U-net model trained on two-lane highways (source domain dataset) was fine-tuned to make accurate predictions on datasets with one-lane highway patterns (target domain dataset). Specifically, encoder-trained and decoder-trained U-net models are presented: during retraining of the former, only the weights in the encoder path of the U-net were allowed to change while the decoder weights remained frozen, and vice versa for the latter. On the test data (target domain), the encoder-trained model (F1-score: 86.9%) outperformed the decoder-trained model (F1-score: 82.1%). Additionally, on an independent dataset, the encoder-trained model (F1-score: 90.1%) performed better than the decoder-trained one (F1-score: 83.2%). Lastly, on the basis of the lane marking results obtained from the encoder-trained U-net, intensity profiles were generated. Such profiles can be used to identify lane marking gaps and investigate their cause through RGB imagery visualization.
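The abstract's two fine-tuning strategies amount to freezing one path of a pretrained U-net and updating only the other. The sketch below illustrates this in PyTorch under stated assumptions; it is not the authors' code. The SimpleUNet class, the checkpoint name, and the toy training step are hypothetical stand-ins for the pretrained model and target-domain data described in the paper.

```python
# Minimal sketch of "encoder-trained" vs. "decoder-trained" fine-tuning:
# freeze one path of a pretrained U-net-like model and retrain the other.
import torch
import torch.nn as nn


class SimpleUNet(nn.Module):
    """Toy model with distinct encoder and decoder paths (hypothetical stand-in)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def freeze_path(model: SimpleUNet, retrain: str) -> None:
    """Leave only one path trainable.

    retrain="encoder": encoder weights updated, decoder frozen (encoder-trained).
    retrain="decoder": decoder weights updated, encoder frozen (decoder-trained).
    """
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(retrain)


model = SimpleUNet()
# model.load_state_dict(torch.load("two_lane_highway_pretrained.pt"))  # hypothetical source-domain checkpoint
freeze_path(model, retrain="encoder")  # the "encoder-trained" variant

# Only trainable parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.BCEWithLogitsLoss()

# One illustrative fine-tuning step on a dummy target-domain batch.
x = torch.randn(2, 1, 64, 64)                     # intensity images
y = torch.randint(0, 2, (2, 1, 64, 64)).float()   # lane-marking masks
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

With `retrain="decoder"` the same helper yields the decoder-trained variant; the paper reports that the encoder-trained configuration generalized better to the one-lane target domain.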

Keywords