IEEE Access (Jan 2020)

Linear Discriminant Analysis via Pseudo Labels: A Unified Framework for Visual Domain Adaptation

  • Rakesh Kumar Sanodiya,
  • Leehter Yao

DOI
https://doi.org/10.1109/ACCESS.2020.3035422
Journal volume & issue
Vol. 8
pp. 200073 – 200090

Abstract

This paper deals with the problem of visual domain adaptation, in which labeled source-domain data are available for training but only unlabeled target-domain data are available for testing. Many recent domain adaptation methods concentrate on extracting domain-invariant features by simultaneously minimizing the distributional and geometrical divergence between domains, while ignoring within-class and between-class structure, especially in the target domain, where labeled data are unavailable. We propose Linear Discriminant Analysis via Pseudo Labels (LDAPL), a unified framework for visual domain adaptation that tackles both issues together. LDAPL learns domain-invariant features across both domains while preserving several important properties: it minimizes the shift between domains both statistically and geometrically, retains the original similarity of data samples, maximizes the target-domain variance, and minimizes the within-class scatter while maximizing the between-class scatter of both domains. Specifically, LDAPL preserves the discriminative information of the target domain (its within-class and between-class structure) using pseudo labels, which are refined until convergence. In extensive experiments on several visual cross-domain benchmarks, including Office+Caltech10 with three types of features (Speeded Up Robust Features (SURF), Deep Convolutional Activation Features (DeCAF6), and Visual Geometry Group fully connected layer (VGG-FC6) features), COIL20 (Columbia Object Image Library), digit, and PIE (Pose, Illumination, and Expression), LDAPL achieved average accuracies of 79.11%, 99.72%, 79.0%, and 84.50%, respectively. Comparative results on several visual cross-domain classification tasks verify that LDAPL significantly outperforms state-of-the-art primitive and domain adaptation methods. In particular, LDAPL improves on the baseline Joint Geometrical and Statistical Alignment (JGSA) method by 6.6%, 5.3%, 6.3%, and 44.93% in average accuracy on Office+Caltech10 (SURF, DeCAF6, and VGG-FC6), COIL20, digit, and PIE, respectively.
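To make the pseudo-label refinement idea from the abstract concrete, the following minimal sketch trains a linear discriminant classifier on labeled source data, assigns pseudo labels to the unlabeled target data, and refits on source plus pseudo-labeled target samples for a few iterations. This is only an illustration on synthetic data under assumed settings; it is not the authors' LDAPL optimization, which jointly learns a projection with statistical and geometrical alignment, variance, and scatter terms.

```python
# Hypothetical sketch of iterative pseudo-label refinement (not the LDAPL objective).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic source/target domains with a small covariate shift (assumed data).
Xs = rng.normal(0.0, 1.0, size=(200, 20))
ys = (Xs[:, 0] + 0.5 * Xs[:, 1] > 0).astype(int)
Xt = rng.normal(0.3, 1.1, size=(200, 20))                # shifted target domain
yt_true = (Xt[:, 0] + 0.5 * Xt[:, 1] > 0).astype(int)    # used only for evaluation

pseudo = None
for it in range(5):
    if pseudo is None:
        # First iteration: fit on labeled source data only.
        clf = LinearDiscriminantAnalysis().fit(Xs, ys)
    else:
        # Later iterations: refit on source + pseudo-labeled target samples,
        # so the target domain's class structure informs the discriminant.
        clf = LinearDiscriminantAnalysis().fit(
            np.vstack([Xs, Xt]), np.concatenate([ys, pseudo])
        )
    pseudo = clf.predict(Xt)                              # refined pseudo labels
    print(f"iter {it}: target accuracy = {(pseudo == yt_true).mean():.3f}")
```

In LDAPL the analogous loop refines the target pseudo labels while re-solving the full projection objective each round, rather than simply refitting a classifier as in this sketch.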

Keywords