IEEE Access (Jan 2022)

Unsupervised Domain Adaptation for 3D Point Clouds by Searched Transformations

  • Dongmin Kang,
  • Yeongwoo Nam,
  • Daeun Kyung,
  • Jonghyun Choi

DOI
https://doi.org/10.1109/ACCESS.2022.3176719
Journal volume & issue
Vol. 10
pp. 56901 – 56913

Abstract


Input-level domain adaptation reduces the burden on a neural encoder, without supervision, by narrowing the domain gap at the input level. It is widely employed in the 2D visual domain, e.g., for images and videos, but has not been utilized for 3D point clouds. We propose the use of input-level domain adaptation for 3D point clouds, namely, point-level domain adaptation. Specifically, we propose to learn a transformation of 3D point clouds by searching for the best combination of operations on point clouds that transfers data from the source domain to the target domain while maintaining the classification label, without supervision from target labels. We decompose the learning objective into two terms: reducing domain shift and preserving label information. On the PointDA-10 benchmark dataset, our method outperforms state-of-the-art unsupervised point cloud domain adaptation methods by large margins (up to +3.97% on average).
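
To make the two-term objective concrete, the following is a minimal sketch (not the authors' code) of how a searched point-cloud transformation could be trained against a domain-shift term plus a label-preservation term. All names (candidate_ops, PointEncoder, the linear-kernel MMD stand-in, the softmax-relaxed operation weights) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: point-level domain adaptation with a searched mixture
# of point-cloud operations. Term 1 reduces domain shift between transformed
# source features and target features; term 2 preserves source label info.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def jitter(pc, sigma=0.01):
    # Add small Gaussian noise to every point.
    return pc + sigma * torch.randn_like(pc)


def scale(pc, factor=0.8):
    # Uniformly scale the cloud.
    return pc * factor


def rotate_z(pc, angle=0.3):
    # Rotate the cloud around the z-axis by a fixed angle.
    c, s = math.cos(angle), math.sin(angle)
    rot = pc.new_tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pc @ rot.T


candidate_ops = [jitter, scale, rotate_z]  # assumed searchable operation pool


class PointEncoder(nn.Module):
    """Tiny PointNet-style encoder: shared per-point MLP + max pooling."""
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim), nn.ReLU())
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, pc):                      # pc: (B, N, 3)
        feat = self.mlp(pc).max(dim=1).values   # global feature (B, feat_dim)
        return feat, self.cls(feat)


def mmd(x, y):
    """Linear-kernel MMD as a simple stand-in for a domain-shift measure."""
    return (x.mean(0) - y.mean(0)).pow(2).sum()


# Softmax-relaxed weights over the operation pool: the "searched" part.
op_logits = nn.Parameter(torch.zeros(len(candidate_ops)))
encoder = PointEncoder()
optim = torch.optim.Adam(list(encoder.parameters()) + [op_logits], lr=1e-3)

src_pc = torch.randn(8, 1024, 3)                # labelled source clouds (toy)
src_y = torch.randint(0, 10, (8,))
tgt_pc = torch.randn(8, 1024, 3)                # unlabelled target clouds (toy)

for step in range(10):
    w = F.softmax(op_logits, dim=0)
    # Mix the candidate operations according to the searched weights.
    transformed = sum(wi * op(src_pc) for wi, op in zip(w, candidate_ops))

    src_feat, src_logits = encoder(transformed)
    tgt_feat, _ = encoder(tgt_pc)

    loss_domain = mmd(src_feat, tgt_feat)            # term 1: reduce domain shift
    loss_label = F.cross_entropy(src_logits, src_y)  # term 2: preserve labels
    loss = loss_domain + loss_label

    optim.zero_grad()
    loss.backward()
    optim.step()
```

The continuous relaxation over operation weights is only one plausible way to "search" a combination of transformations; the key point the sketch illustrates is that the transformation is optimized against both a domain-alignment signal on unlabelled target data and a classification signal on labelled source data.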

Keywords