International Journal of Applied Earth Observation and Geoinformation (Jul 2023)
Deep learning-based semantic segmentation of urban-scale 3D meshes in remote sensing: A survey
Abstract
Semantic segmentation of a 3D mesh is the classification of its constituent elements into specific classes or categories. Leveraging the powerful feature-extraction abilities of deep neural networks (DNNs), researchers have obtained significant results in the semantic segmentation of various remotely sensed data formats. With the increased use of DNNs to segment remotely sensed data, there have been commensurate in-depth reviews and surveys summarizing the learning-based techniques and methodologies underlying these methods. However, most of these surveys have focused on popular data formats such as LiDAR point clouds, synthetic aperture radar (SAR) images, and hyperspectral images (HSI), while 3D meshes have received little attention. In this paper, to the best of our knowledge, we present the first comprehensive and contemporary survey of recent advances in applying deep learning techniques to the semantic segmentation of urban-scale 3D meshes. We first describe the different approaches that mesh-based learning methods employ to generalize and implement learning techniques on the mesh surface, and then describe how element-wise classification is achieved through these methods. We also provide an in-depth discussion and comparative analysis of the surveyed methods, followed by a summary of the benchmark large-scale mesh datasets and the evaluation metrics used to assess segmentation performance. Finally, we summarize some of the open problems in the field and suggest future research directions that may help researchers in the community.