IET Computer Vision (Feb 2023)

Domain‐specific feature recalibration and alignment for multi‐source unsupervised domain adaptation

  • Mengzhu Wang,
  • Dingyao Chen,
  • Fangzhou Tan,
  • Tianyi Liang,
  • Long Lan,
  • Xiang Zhang,
  • Zhigang Luo

DOI
https://doi.org/10.1049/cvi2.12126
Journal volume & issue
Vol. 17, no. 1
pp. 26–38

Abstract

Traditional unsupervised domain adaptation (UDA) usually assumes a single labelled source domain and an unlabelled target domain. In real environments, however, labelled source data often come from multiple different distributions. Multi-source unsupervised domain adaptation (MUDA) is proposed to handle this problem: it aims to adapt a model trained on multiple labelled source domains to the unlabelled target domain. In this paper, a novel MUDA method based on domain-specific feature recalibration and alignment (FRA) is proposed. To achieve feature recalibration, the authors leverage channel attention to pick out significant channels and spatial attention to focus on important features within those channels; integrating the two yields effective domain-specific feature recalibration, which can be of great importance to MUDA. To further improve adaptation, the authors propose domain-specific feature alignment, which combines a Maximum Mean Discrepancy (MMD) loss with a JS-divergence loss: MMD reduces the discrepancy between the source and target domains, while the JS-divergence loss encourages prediction consistency among the classifiers trained on the different source domains. Four sets of experiments on popular MUDA benchmarks show that FRA achieves significantly better results.
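
The following is a minimal PyTorch sketch of the two components described in the abstract: a channel-plus-spatial attention block for domain-specific feature recalibration, and the MMD and JS-divergence losses used for alignment. The module layout (CBAM-style attention), the Gaussian kernel for MMD, and all names are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch of (1) feature recalibration via channel + spatial attention and
# (2) alignment losses combining MMD and JS-divergence. Names and design
# details (e.g. the Gaussian kernel, reduction ratio) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialRecalibration(nn.Module):
    """Recalibrate a feature map with channel attention, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels (SE/CBAM style).
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: pick out significant channels.
        avg = self.channel_mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.channel_mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: focus on important locations within those channels.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(stats))


def mmd_loss(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD with a Gaussian kernel between source and target feature batches."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return (kernel(source, source).mean() + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())


def js_consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """JS divergence between two classifiers' predictions on the same batch."""
    p, q = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))
```

In a multi-source setup of this kind, each source domain would typically receive its own recalibration block and classifier head; the MMD loss would be computed between each source's recalibrated features and the target's, and the JS-divergence loss between the different classifiers' predictions on the same target batch.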