International Journal of Applied Earth Observation and Geoinformation (Dec 2022)

WSPointNet: A multi-branch weakly supervised learning network for semantic segmentation of large-scale mobile laser scanning point clouds

  • Xiangda Lei,
  • Haiyan Guan,
  • Lingfei Ma,
  • Yongtao Yu,
  • Zhen Dong,
  • Kyle Gao,
  • Mahmoud Reza Delavar,
  • Jonathan Li

Journal volume & issue
Vol. 115
p. 103129

Abstract

Semantic segmentation of large-scale mobile laser scanning (MLS) point clouds is essential for urban scene understanding. However, most existing semantic segmentation methods require a large quantity of labeled data, whose annotation is labor-intensive and time-consuming. To address this challenge, we propose a multi-branch weakly supervised learning network (WSPointNet). Our method comprises a basic weakly supervised framework and a multi-branch weakly supervised module. Given input point clouds and a few labels, the basic weakly supervised framework outputs predictions for the input point clouds and the underlying supervision signals for the whole network. Next, the multi-branch weakly supervised module exploits the latent information of both unlabeled and labeled points while preventing model over-fitting. Concretely, the module comprises an ensemble prediction constraint branch, a contrast-guided entropy regularization branch, and an adaptive pseudo-label learning branch. The ensemble prediction constraint branch enhances the stability of point-wise predictions. The contrast-guided entropy regularization branch prevents model over-fitting by comparing the ensemble prediction labels with the current prediction labels. The adaptive pseudo-label learning branch provides efficient and adaptive supervision signals for model training via the consistency cost and ensemble predictions. Extensive experiments on two MLS benchmarks show that WSPointNet achieves promising semantic segmentation performance with sparsely annotated points. On the public Toronto3D dataset, with only 0.1% of the points labeled, WSPointNet obtains an overall accuracy of 96.76% and a mIoU of 78.96%, outperforming most of the compared fully supervised methods.
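The three branches build on standard weakly supervised ingredients: an exponential moving average of per-point predictions (ensemble prediction), an entropy measure of prediction confidence, and confidence-gated pseudo-labels. A minimal, self-contained sketch of these ingredients for a single point is given below; the function names, the EMA weight, and the confidence threshold are illustrative assumptions, not values from the paper:

```python
import math

def softmax(logits):
    # Convert raw class scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ema_update(ensemble, current, alpha=0.6):
    # Ensemble prediction: exponential moving average of the per-class
    # probabilities across training epochs (alpha is a hypothetical weight).
    return [alpha * e + (1 - alpha) * c for e, c in zip(ensemble, current)]

def entropy(probs):
    # Shannon entropy of a prediction; low entropy = confident prediction.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_pseudo_label(ensemble_probs, current_probs, conf_thresh=0.8):
    # Adaptive pseudo-label selection (illustrative): accept a pseudo-label
    # only when the ensemble and current predictions agree and the ensemble
    # is confident enough; otherwise the point contributes no pseudo-label.
    ens_label = max(range(len(ensemble_probs)), key=ensemble_probs.__getitem__)
    cur_label = max(range(len(current_probs)), key=current_probs.__getitem__)
    if ens_label == cur_label and ensemble_probs[ens_label] >= conf_thresh:
        return ens_label
    return None
```

In a training loop, the entropy term would be minimized over unlabeled points to sharpen predictions, while the selected pseudo-labels would supplement the sparse ground-truth labels in the supervised loss.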

Keywords