IEEE Access (Jan 2021)

Multi-View Collaborative Learning for Semi-Supervised Domain Adaptation

  • Ba Hung Ngo,
  • Ju Hyun Kim,
  • Yeon Jeong Chae,
  • Sung In Cho

DOI
https://doi.org/10.1109/ACCESS.2021.3136567
Journal volume & issue
Vol. 9
pp. 166488–166501

Abstract

Recently, Semi-supervised Domain Adaptation (SSDA) has become more practical because a small number of labeled target samples can significantly boost empirical target performance. Several current methods focus on prototype-based alignment to achieve cross-domain invariance, in which labeled samples from the source and target domains are concatenated to estimate class prototypes; the model is then trained to assign the unlabeled target data to the prototype of the same class. However, such methods fail to exploit the advantage of the few labeled target data because the labeled source data dominate the prototypes during supervision. Moreover, a recent method (Yang et al., 2021) showed that concatenating source and target samples for training can damage the semantic information of the representations, which degrades the trained model’s ability to generate discriminative features. To solve these problems, in this paper, we divide the labeled source and target samples into two groups for training: one contains the large number of labeled source samples, and the other contains the few labeled target samples. We then propose a novel SSDA framework consisting of two models. The model trained on the labeled source samples, which provides an “inter-view” of the unlabeled target data, is called the inter-view model; the model trained on the few labeled target samples, which provides an “intra-view” of the unlabeled target data, is called the intra-view model. Finally, the two models collaborate to fully exploit the information in the unlabeled target data. By utilizing the advantages of multiple views and collaborative training, our proposed method achieves, to the best of our knowledge, state-of-the-art SSDA classification performance in extensive experiments on several visual domain adaptation benchmark datasets.
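
The framework described above splits the labeled data into a source group and a target group, trains one model on each, and lets the two models collaborate on the unlabeled target data. The following is a minimal PyTorch sketch of that training loop, assuming a simple co-training-style exchange of confident pseudo-labels as the collaboration step; the toy backbone, the confidence threshold tau, and the exact loss terms are illustrative assumptions, not the paper's specification.

```python
# Sketch of the two-view split from the abstract: an inter-view model
# trained on the many labeled source samples, an intra-view model trained
# on the few labeled target samples, and a collaboration term on the
# unlabeled target data. The collaboration shown here is a generic
# confident pseudo-label exchange (an assumption for illustration).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(num_classes=10, feat_dim=64):
    # Stand-in backbone for 32x32x3 inputs; the paper presumably uses a
    # deeper CNN feature extractor.
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, feat_dim),
                         nn.ReLU(), nn.Linear(feat_dim, num_classes))

inter_view = make_model()  # trained on the labeled SOURCE group
intra_view = make_model()  # trained on the labeled TARGET group
opt = torch.optim.SGD(list(inter_view.parameters()) +
                      list(intra_view.parameters()), lr=0.01)

def train_step(x_src, y_src, x_tgt_l, y_tgt_l, x_tgt_u, tau=0.9):
    # 1) Supervised losses on the two disjoint labeled groups, so the
    #    few target labels are never dominated by source labels.
    loss_inter = F.cross_entropy(inter_view(x_src), y_src)
    loss_intra = F.cross_entropy(intra_view(x_tgt_l), y_tgt_l)

    # 2) Collaboration on the unlabeled target data: each model
    #    supervises the other with its confident predictions
    #    (co-training style; an assumed mechanism, not the paper's).
    with torch.no_grad():
        p_inter = F.softmax(inter_view(x_tgt_u), dim=1)
        p_intra = F.softmax(intra_view(x_tgt_u), dim=1)
        conf_inter, pl_inter = p_inter.max(dim=1)
        conf_intra, pl_intra = p_intra.max(dim=1)

    m1 = conf_inter >= tau  # inter-view teaches intra-view
    m2 = conf_intra >= tau  # intra-view teaches inter-view
    loss_collab = x_tgt_u.new_zeros(())
    if m1.any():
        loss_collab = loss_collab + F.cross_entropy(
            intra_view(x_tgt_u[m1]), pl_inter[m1])
    if m2.any():
        loss_collab = loss_collab + F.cross_entropy(
            inter_view(x_tgt_u[m2]), pl_intra[m2])

    loss = loss_inter + loss_intra + loss_collab
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```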

Keywords