IEEE Access (Jan 2019)
End-to-End Visual Domain Adaptation Network for Cross-Domain 3D CPS Data Retrieval
Abstract
3D CPS (cyber-physical system) data are widely generated and used in many applications, e.g., autonomous driving and unmanned aerial vehicles. For large-scale 3D CPS data analysis, 3D object retrieval plays a significant role in urban perception. In this paper, we propose an end-to-end domain adaptation framework for cross-domain 3D object retrieval (C3DOR-Net), which learns a joint embedding space for 3D objects from different domains in an end-to-end manner. We focus on the unsupervised case, in which 3D objects in the target domain are unlabeled. To better encode a 3D object, the proposed method learns multi-view visual features in a data-driven manner for 3D object representation. A domain adaptation strategy is then applied to benefit both domain alignment and final classification. In particular, a center-based discriminative feature learning method encourages domain-invariant features with better intra-class compactness and inter-class separability, so C3DOR-Net achieves strong retrieval performance by maximizing inter-class divergence and minimizing intra-class divergence. We evaluate our method on two cross-domain protocols: 1) CAD-to-CAD object retrieval on two popular 3D datasets (NTU and PSB) in three designed cross-domain scenarios; and 2) SHREC'19 monocular image based 3D object retrieval. Experimental results demonstrate that our method significantly boosts cross-domain retrieval performance.
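To make the center-based discriminative objective concrete, the sketch below shows a generic center-loss-style term that pulls embeddings toward their class centers (intra-class compactness) while keeping different class centers apart (inter-class separability). This is an illustrative assumption, not the paper's released implementation; the class name, the hinge-with-margin formulation, and the hyperparameter names are all hypothetical.

```python
# Minimal sketch of a center-based discriminative loss (assumed formulation,
# not the authors' code). Intra-class term pulls features to their class center;
# inter-class term pushes distinct centers at least `margin` apart.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterDiscriminativeLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable center per class in the shared embedding space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class compactness: squared distance of each feature to its center.
        centers_batch = self.centers[labels]                     # (B, D)
        intra = ((features - centers_batch) ** 2).sum(dim=1).mean()

        # Inter-class separability: hinge penalty if two class centers
        # come closer than `margin`.
        dists = torch.cdist(self.centers, self.centers, p=2)     # (C, C)
        off_diag = dists[~torch.eye(len(self.centers), dtype=torch.bool)]
        inter = F.relu(self.margin - off_diag).mean()

        return intra + inter


# Usage sketch: combined with a standard classification loss on labeled
# source-domain features, weighted by a hypothetical coefficient lambda_c:
#   loss = ce_loss(logits_src, y_src) + lambda_c * center_loss(feat_src, y_src)
```

In an end-to-end pipeline such as the one described, a term of this kind would typically be added on top of the multi-view feature extractor and the domain-alignment objective, so that the shared embedding space is both domain-invariant and class-discriminative.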
Keywords