IEEE Access (Jan 2023)

Mixup Feature: A Pretext Task Self-Supervised Learning Method for Enhanced Visual Feature Learning

  • Jiashu Xu,
  • Sergii Stirenko

DOI
https://doi.org/10.1109/ACCESS.2023.3301561
Journal volume & issue
Vol. 11
pp. 82400–82409

Abstract

Self-supervised learning has become an increasingly popular research topic in computer vision. In this study, we propose a novel self-supervised learning approach that uses Mixup features as the pretext task. The method learns visual representations by predicting the Mixup feature of a masked image, which serves as a proxy for higher-level semantic information. Specifically, we investigate the efficacy of Mixup features as the prediction target for self-supervised learning: with the mixing coefficient $\lambda$ as a hyperparameter, the Mixup operation creates pairwise combinations of Sobel edge, HOG, and LBP feature maps. We employ a vision transformer as the backbone network, drawing inspiration from masked autoencoders (MAE). We evaluate the proposed method on three benchmark datasets, CIFAR-10, CIFAR-100, and STL-10, and compare it with other state-of-the-art self-supervised learning approaches. Experiments show that the mixed HOG-Sobel feature maps achieve the best fine-tuning results on CIFAR-10 and STL-10. Furthermore, compared with contrastive self-supervised methods, our approach is more efficient, requiring shorter training and no reliance on data augmentation. Compared with generative self-supervised approaches based on MAE, the average performance improvement is 0.4%. Overall, the proposed self-supervised learning method based on Mixup features offers a promising direction for future research in computer vision and has the potential to improve performance across various downstream tasks. Our code will be published on GitHub.
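The core Mixup operation described in the abstract is a convex combination of two feature maps weighted by $\lambda$. The sketch below illustrates that operation on two NumPy arrays standing in for HOG and Sobel feature maps; the function name, array shapes, and the fixed $\lambda$ value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mixup_feature(feat_a, feat_b, lam=0.5):
    """Mixup of two feature maps: lam * A + (1 - lam) * B.

    Both inputs must share the same shape; `lam` is the mixing
    coefficient (the paper's hyperparameter lambda).
    """
    return lam * feat_a + (1.0 - lam) * feat_b

# Hypothetical stand-ins for precomputed HOG and Sobel feature maps.
rng = np.random.default_rng(0)
hog_map = rng.random((32, 32))
sobel_map = rng.random((32, 32))

# Mixed HOG-Sobel target for the pretext prediction task.
mixed = mixup_feature(hog_map, sobel_map, lam=0.7)
```

In a masked-image pipeline such as MAE, this mixed map would replace raw pixels as the reconstruction target for the masked patches.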

Keywords