IEEE Access (Jan 2024)
PiPCS: Perspective Independent Point Cloud Simplifier for Complex 3D Indoor Scenes
Abstract
The emergence of commercially accessible depth sensors has driven the widespread adoption of 3D data, offering substantial benefits across diverse applications, ranging from human activity recognition to augmented reality. However, indoor environments present significant challenges for 3D computer vision applications, particularly in cluttered and dynamic scenes where background bounding surfaces hinder the detection and analysis of foreground objects. We introduce a novel perspective-independent point cloud simplifier (PiPCS) for complex 3D indoor scenes. PiPCS streamlines 3D computer vision workflows by contextually segmenting and subtracting background bounding surfaces while preserving segmented foreground objects within indoor scenes, effectively reducing the size of indoor point clouds and enhancing 3D indoor scene perception. Methodologically, we use estimated surface normals to divide an input point cloud into distinct zones, which are then split into multiple parallel clusters. Next, we find the largest plane in each cluster and sort the fitted planes within each zone by their distance along the zone's normal vector to identify the bounding surfaces. Finally, we segment the 3D background, simplify the point cloud with a voxel-based background subtraction technique, and segment 3D foreground objects via a cluster-based segmentation approach. We evaluated PiPCS on the Stanford S3DIS dataset and our own challenging dataset, achieving an average specificity of 97.08% and an average F1 score of 91.27% on S3DIS, with size reductions averaging 74.11% overall. Our experimental and evaluation results demonstrate that PiPCS robustly simplifies and reduces the size of unorganized indoor point clouds.
Keywords