IEEE Access (Jan 2024)
3D Semantic Novelty Detection via Large-Scale Pre-Trained Models
Abstract
Shifting deep learning models from lab environments to real-world settings entails preparing them to handle unforeseen conditions, including the chance of encountering novel objects from classes that were not included in their training data. Such occurrences can pose serious risks in various applications. The task of semantic novelty detection has attracted significant attention in recent years, mainly on 2D images, overlooking the complex 3D nature of the real world. In this study, we address this gap by examining the geometric structures of objects within 3D point clouds to detect semantic novelty effectively. We advance the field by introducing 3D-SeND, a method that harnesses a large-scale pre-trained model to extract patch-based object representations directly from its intermediate feature representations. These patches are used to characterize each known class precisely. At inference, a normalcy score is obtained by assessing whether a test sample can be reconstructed predominantly from patches of a single known class or from multiple classes. We evaluate 3D-SeND on real-world point cloud samples when the reference known data are synthetic and demonstrate that it excels in both standard and few-shot scenarios. Thanks to its patch-based object representation, 3D-SeND's predictions can be visualized, offering a valuable explanation of the decision process. Moreover, the inherent training-free nature of 3D-SeND allows for its immediate application to a wide array of real-world tasks, offering a compelling advantage over approaches that require a task-specific learning phase. Our code is available at https://paolotron.github.io/3DSend.github.io.
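To make the normalcy-score idea concrete, the sketch below is a minimal, illustrative Python example (not the authors' implementation): per-class memory banks of patch features are assumed to come from a frozen large-scale pre-trained 3D backbone, and a test sample is scored by how strongly its patches are explained by a single known class. All function and variable names (`build_class_banks`, `normalcy_score`, the toy class names) are assumptions introduced here for illustration only.

```python
# Minimal sketch of patch-based normalcy scoring, assuming patch features
# have already been extracted from a frozen pre-trained 3D model.
import numpy as np

def build_class_banks(patch_features_per_class):
    """patch_features_per_class: dict class_name -> (n_patches, d) array
    of patch features collected from known (reference) samples."""
    return {c: np.asarray(f, dtype=np.float32)
            for c, f in patch_features_per_class.items()}

def normalcy_score(test_patches, class_banks):
    """Assign each test patch to the class whose bank holds its nearest
    neighbour; return the fraction of patches claimed by the most frequent
    class (1.0 = the sample is explained by a single known class)."""
    test_patches = np.asarray(test_patches, dtype=np.float32)
    votes = []
    for p in test_patches:
        # nearest-neighbour distance from patch p to each class bank
        dists = {c: np.min(np.linalg.norm(bank - p, axis=1))
                 for c, bank in class_banks.items()}
        votes.append(min(dists, key=dists.get))
    counts = np.unique(votes, return_counts=True)[1]
    return counts.max() / len(votes)

# Toy usage with random features (illustrative only): a sample whose patches
# mix several banks receives a lower score, flagging it as potentially novel.
rng = np.random.default_rng(0)
banks = build_class_banks({
    "chair": rng.normal(0.0, 1.0, (200, 64)),
    "table": rng.normal(3.0, 1.0, (200, 64)),
})
known_like = rng.normal(0.0, 1.0, (32, 64))   # resembles "chair" patches
novel_like = rng.normal(1.5, 1.0, (32, 64))   # spreads across both banks
print(normalcy_score(known_like, banks), normalcy_score(novel_like, banks))
```

Because the score only requires nearest-neighbour lookups against stored patch features, no task-specific training is needed, which is consistent with the training-free property claimed in the abstract.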
Keywords