IEEE Access (Jan 2023)

Lidar Annotation Is All You Need

  • Dinar Sharafutdinov,
  • Stanislav Kuskov,
  • Saian Protasov,
  • Alexey Voropaev

DOI
https://doi.org/10.1109/ACCESS.2023.3337995
Journal volume & issue
Vol. 11
pp. 135820–135830

Abstract


In recent years, computer vision has transformed fields such as medical imaging, object recognition, and geospatial analytics. One of the fundamental tasks in computer vision is semantic image segmentation, which is vital for precise object delineation. Autonomous driving is one of the key areas where computer vision algorithms are applied. Road surface segmentation is crucial in self-driving systems, but it requires a labor-intensive annotation process across multiple data domains. The work described in this paper aims to improve the efficiency of image segmentation using a convolutional neural network in a multi-sensor setup. The approach leverages lidar (Light Detection and Ranging) annotations to directly train image segmentation models on RGB images. Lidar supplements the images by emitting laser pulses and measuring reflections to provide depth information. However, lidar’s sparse point clouds often create difficulties for accurate object segmentation, and segmenting point clouds requires time-consuming preliminary data preparation and substantial computational resources. The key innovation of our approach is a masked loss that addresses the sparsity of ground-truth masks derived from lidar point clouds. By computing the loss exclusively at pixels where lidar points exist, the model learns road segmentation on images using lidar points as ground truth. This approach also allows different ground-truth data types to be blended seamlessly during model training. Experimental validation on benchmark datasets shows performance comparable to a high-quality image segmentation model. Incorporating lidar reduces the annotation burden and enables training of image-segmentation models without loss of segmentation quality. The methodology is tested in experiments on diverse datasets, both publicly available and proprietary, and the strengths and weaknesses of the proposed method are discussed. 
The work facilitates efficient use of point clouds for image model training by advancing neural network training for image segmentation.
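The masked-loss idea described in the abstract can be illustrated with a short sketch. This is not the authors' implementation; it assumes a per-pixel binary cross-entropy and a hypothetical validity mask marking pixels onto which labeled lidar points project, so the loss is averaged only over those pixels.

```python
import numpy as np

def masked_bce_loss(pred, target, valid_mask, eps=1e-7):
    """Binary cross-entropy computed only where lidar ground truth exists.

    pred       -- predicted road probabilities, shape (H, W), values in (0, 1)
    target     -- sparse ground truth from projected lidar labels, shape (H, W)
    valid_mask -- 1 where a labeled lidar point projects onto the pixel, else 0
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    n_valid = valid_mask.sum()
    if n_valid == 0:
        return 0.0  # no lidar coverage in this image: contribute no loss
    # Zero out the loss at unlabeled pixels, average over labeled ones only.
    return float((bce * valid_mask).sum() / n_valid)

# Tiny example: only the top row carries projected lidar labels.
pred = np.array([[0.9, 0.1],
                 [0.5, 0.2]])
target = np.array([[1.0, 0.0],
                   [1.0, 1.0]])
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])
loss = masked_bce_loss(pred, target, mask)
```

Predictions at unlabeled pixels (the bottom row here) have no effect on the loss, which is exactly what lets dense human-annotated masks and sparse lidar-derived masks be mixed in one training run.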

Keywords