IEEE Access (Jan 2022)

An Efficient Approach to Select Instances in Self-Training and Co-Training Semi-Supervised Methods

  • Karliane Medeiros Ovidio Vale,
  • Arthur Costa Gorgonio,
  • Flavius Da Luz E Gorgonio,
  • Anne Magaly De Paula Canuto

DOI
https://doi.org/10.1109/ACCESS.2021.3138682
Journal volume & issue
Vol. 10
pp. 7254–7276

Abstract

Semi-supervised learning is a machine learning approach that integrates supervised and unsupervised learning mechanisms. In this setting, most labels in the training set are unknown, while a small portion of the data has known labels. Semi-supervised learning is attractive because it can exploit both labeled and unlabeled data to perform better than purely supervised learning. This paper presents a study in the field of semi-supervised learning and modifies two well-known semi-supervised algorithms: self-training and co-training. In the literature, it is common for studies to change the structure of these algorithms; however, none of them proposes automating the labeling process of unlabeled instances, which is the main purpose of this work. To achieve this goal, three methods are proposed: FlexCon-G, FlexCon and FlexCon-C. The main difference among these methods lies in how the confidence rate is calculated and in the strategy used to select a label in each iteration. To evaluate the proposed methods’ performance, an empirical analysis is conducted in which the methods are evaluated on 30 datasets with different characteristics. The obtained results indicate that all three proposed methods perform better than the original self-training and co-training methods in most of the analysed cases.
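To make the iterative labeling process described above concrete, the following is a minimal sketch of classic self-training, not the paper's FlexCon variants. A nearest-centroid classifier stands in for an arbitrary base learner, and the fixed confidence threshold `conf` is a hypothetical illustrative parameter; the paper's contribution is precisely to adjust such a threshold automatically at each iteration rather than fixing it.

```python
# Illustrative self-training loop (assumed setup, not the paper's FlexCon methods).
import numpy as np

def fit_centroids(X, y):
    """Base learner: one centroid per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    """Predict the nearest centroid; confidence = softmax over negative distances."""
    labels = np.array(sorted(centroids))
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    idx = p.argmax(axis=1)
    return labels[idx], p[np.arange(len(X)), idx]

def self_training(X_lab, y_lab, X_unl, conf=0.7, max_iter=10):
    """Iteratively move high-confidence predictions into the labeled set."""
    X_lab, y_lab, X_unl = X_lab.copy(), y_lab.copy(), X_unl.copy()
    for _ in range(max_iter):
        if len(X_unl) == 0:
            break
        model = fit_centroids(X_lab, y_lab)
        pred, p = predict_with_confidence(model, X_unl)
        keep = p >= conf                # select instances the model is confident about
        if not keep.any():
            break                       # nothing exceeds the threshold: stop early
        X_lab = np.vstack([X_lab, X_unl[keep]])
        y_lab = np.concatenate([y_lab, pred[keep]])
        X_unl = X_unl[~keep]            # remaining unlabeled pool shrinks each round
    return fit_centroids(X_lab, y_lab)
```

Co-training follows the same pattern but maintains two classifiers trained on different feature views, each labeling instances for the other; in both algorithms the fixed threshold is the knob the FlexCon methods replace with an adaptive confidence rate.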

Keywords