IEEE Access (Jan 2025)
Multi-View Prototypical Transport for Unsupervised Domain Adaptation
Abstract
Unsupervised Domain Adaptation (UDA) methods struggle to bridge the gap between a labeled source domain and an unlabeled target domain, in large part because the deep feature representations drawn from the penultimate layer of backbone feature extractors are rigid. While discriminative, these deep representations are highly task-specific and therefore often fail to generalize under distributional shift. To overcome these limitations, we introduce a novel representation learning framework, Multi-View Prototypical Transport (MPT), which leverages a multi-view hypothesis model to integrate the more general information present in shallower layers. This design enables a more comprehensive modeling of the relationships among intermediate features. In addition, our framework incorporates a novel multi-view prototypical learning strategy that not only transfers domain-general representations but also substantially improves robustness to target-domain outliers. Extensive experiments on standard benchmark datasets show that our method outperforms existing state-of-the-art UDA approaches, confirming its effectiveness under complex domain shifts.
Keywords