IEEE Access (Jan 2023)
Domain-Adaptive Vision Transformers for Generalizing Across Visual Domains
Abstract
Deep-learning models often struggle to generalize to unseen domains because of the distribution shift between training data and real-world data. Domain generalization aims to train models that acquire general features from data drawn from different domains, thereby improving performance on unseen domains. Inspired by the glance-and-gaze approach, which mimics the way humans perceive the real world, we introduce the domain-adaptive vision transformer (DA-ViT), which adopts a human cognitive perspective for domain generalization. We merge glance and gaze blocks so that each block first captures coarse, general information and then acquires more detailed, focused information. Unlike previous methods, which predominantly employ convolutional neural networks, we adapt the ViT architecture to learn features that are robust across different visual domains. DA-ViT is pretrained on the ImageNet-1K dataset and designed to adaptively learn features that generalize across various visual domains. We evaluate the adapted model for domain generalization and show that it outperforms non-ensemble algorithms based on the ResNet-50 model by 0.7 percentage points on the VLCS benchmark dataset. The proposed model offers a new approach to domain generalization that leverages the capabilities of vision transformers to adapt effectively to diverse visual domains.
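To make the glance-and-gaze idea concrete, the minimal PyTorch sketch below pairs a coarse "glance" path (self-attention over pooled tokens) with a fine "gaze" path (a depthwise convolution) inside a single transformer block. The abstract does not describe the implementation, so the class name GlanceGazeBlock, the pooling-plus-attention glance path, the depthwise-convolution gaze path, and the merge-by-addition are illustrative assumptions rather than the authors' DA-ViT code.

```python
# Illustrative sketch only: all module names, shapes, and the merge-by-addition
# choice below are assumptions for explanation, not the published DA-ViT implementation.
import torch
import torch.nn as nn


class GlanceGazeBlock(nn.Module):
    """Hypothetical block merging a coarse 'glance' path with a detailed 'gaze' path."""

    def __init__(self, dim: int, num_heads: int = 8, pool_size: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Glance path: global self-attention over spatially pooled (coarse) tokens.
        self.pool = nn.AvgPool2d(pool_size)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gaze path: depthwise convolution capturing fine local detail.
        self.gaze = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) token feature map.
        b, c, h, w = x.shape
        y = self.norm1(x.flatten(2).transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)

        # Glance: attend over pooled tokens, then upsample back to full resolution.
        coarse = self.pool(y).flatten(2).transpose(1, 2)            # (B, N_coarse, C)
        coarse, _ = self.attn(coarse, coarse, coarse)
        hc, wc = h // self.pool.kernel_size, w // self.pool.kernel_size
        coarse = coarse.transpose(1, 2).reshape(b, c, hc, wc)
        coarse = nn.functional.interpolate(coarse, size=(h, w), mode="nearest")

        # Gaze: local depthwise convolution on the same normalized features.
        fine = self.gaze(y)

        # Merge glance and gaze information, then apply a standard transformer MLP.
        x = x + coarse + fine
        t = x.flatten(2).transpose(1, 2)
        t = t + self.mlp(self.norm2(t))
        return t.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = GlanceGazeBlock(dim=64)
    out = block(torch.randn(2, 64, 14, 14))
    print(out.shape)  # torch.Size([2, 64, 14, 14])
```

In this sketch the glance path supplies global, domain-general context while the gaze path preserves local detail, and their sum feeds a standard transformer MLP; how the actual DA-ViT merges the two blocks is specified in the paper body, not here.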
Keywords