IEEE Access (Jan 2020)

Semi-Supervised Learning for Fine-Grained Classification With Self-Training

  • Obed Tettey Nartey,
  • Guowu Yang,
  • Jinzhao Wu,
  • Sarpong Kwadwo Asare

DOI
https://doi.org/10.1109/ACCESS.2019.2962258
Journal volume & issue
Vol. 8
pp. 2109–2121

Abstract

Semi-supervised learning is a machine learning approach that tackles the challenge of having a large set of unlabeled data and only a few labeled examples. In this paper we adopt a semi-supervised self-training method to increase the amount of training data, prevent overfitting, and improve the performance of deep models, proposing a novel selection algorithm that prevents mistake reinforcement, a common problem in conventional self-training models. The model leverages unlabeled data as follows: after each round of training, we first generate pseudo-labels on the unlabeled set; we then select the top-k most-confident pseudo-labeled images from each class, add them with their pseudo-labels to the labeled training samples, and retrain the network on the updated training data. The method improves accuracy in two ways: it bridges the gap in the appearance of visual objects, and it enlarges the training set to meet the data demands of deep models. We demonstrate the effectiveness of the model through experiments on four standard fine-grained benchmark datasets: Stanford Dogs, Stanford Cars, 102-Oxford Flowers, and CUB-200-2011. We further evaluate the model on coarse-grained data. Experimental results show that the proposed framework outperforms previous work on the same data, and the model obtains higher classification accuracy than most of the supervised learning baselines.
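The per-class top-k confident selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each unlabeled sample comes with a predicted class and a confidence score, and the function name `select_top_k` is hypothetical.

```python
def select_top_k(pseudo_labeled, k):
    """Keep the k most-confident pseudo-labeled samples per predicted class.

    pseudo_labeled: list of (sample_id, predicted_class, confidence) tuples,
    e.g. softmax outputs on the unlabeled pool after a training round.
    Returns a list of (sample_id, predicted_class) pairs to add to the
    labeled training set before retraining.
    """
    # Group predictions by their pseudo-label (predicted class).
    by_class = {}
    for sample_id, cls, conf in pseudo_labeled:
        by_class.setdefault(cls, []).append((sample_id, conf))

    selected = []
    for cls, items in by_class.items():
        # Sort each class's candidates by confidence, descending,
        # and keep only the top-k; low-confidence (likely wrong)
        # pseudo-labels are discarded to limit mistake reinforcement.
        items.sort(key=lambda pair: pair[1], reverse=True)
        selected.extend((sample_id, cls) for sample_id, _ in items[:k])
    return selected


# Example: model predictions on a small unlabeled pool.
predictions = [
    ("img1", "dog", 0.97), ("img2", "dog", 0.55),
    ("img3", "dog", 0.88), ("img4", "car", 0.91),
    ("img5", "car", 0.42),
]
print(select_top_k(predictions, k=2))
# With k=2, the least-confident "dog" prediction (img2) is dropped.
```

In a full self-training loop this selection would run after each retraining round, with the selected pairs appended to the labeled set before the next round.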

Keywords