IEEE Access (Jan 2024)

Progressive Ensemble Learning for In-Sample Data Cleaning

  • Jung-Hua Wang,
  • Shih-Kai Lee,
  • Ting-Yuan Wang,
  • Ming-Jer Chen,
  • Shu-Wei Hsu

DOI
https://doi.org/10.1109/ACCESS.2024.3468035
Journal volume & issue
Vol. 12
pp. 140643–140659

Abstract

We present an ensemble learning-based data cleaning approach, termed ELDC, capable of identifying and pruning anomalous data. ELDC is distinguished by the fact that an ensemble of base models can be trained directly on noisy in-sample data and can dynamically yield clean data during iterative training. Each base model uses a random subset of the target dataset, which may initially contain up to 40% label errors. After each training iteration, anomalous data are discriminated from clean data by a majority voting scheme, and three types of anomaly (mislabeled samples, confusing samples, and outliers) can be identified from a statistical pattern jointly determined by the prediction outputs of the base models. By iterating this train-vote-remove cycle, noisy in-sample data are progressively removed until a prespecified stopping condition is reached. Comprehensive experiments, including out-of-sample tests, verify the effectiveness of ELDC in simultaneously suppressing the bias and variance of the prediction output. The ELDC framework is highly flexible, as it is not bound to a specific model and admits different transfer-learning configurations. AlexNet, ResNet50, and GoogLeNet are used as base models and trained on various benchmark datasets; the results show that ELDC outperforms state-of-the-art cleaning methods.
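
To make the train-vote-remove cycle concrete, the following is a minimal sketch of one plausible reading of it, assuming small scikit-learn classifiers in place of the paper's CNN base models; the names and hyperparameter values (n_models, subset_frac, vote_threshold, max_rounds) are illustrative assumptions, not the paper's settings, and the sketch flags anomalies without separating them into the three types the paper identifies.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def eldc_sketch(X, y, n_models=7, subset_frac=0.6, vote_threshold=0.5,
                max_rounds=5, seed=0):
    """Hypothetical train-vote-remove loop: returns the indices of
    samples that survive all pruning rounds."""
    rng = np.random.default_rng(seed)
    keep = np.arange(len(X))            # indices currently considered clean
    for _ in range(max_rounds):
        Xk, yk = X[keep], y[keep]
        votes = np.zeros(len(keep))
        for _ in range(n_models):
            # Train each base model on a random subset of the current data.
            idx = rng.choice(len(keep), size=int(subset_frac * len(keep)),
                             replace=False)
            model = DecisionTreeClassifier(max_depth=5, random_state=0)
            model.fit(Xk[idx], yk[idx])
            # A vote is cast against a sample when the model's prediction
            # disagrees with the sample's given label.
            votes += (model.predict(Xk) != yk)
        # Majority voting: samples that most base models mislabel are
        # flagged as anomalies. (The paper further separates flagged data
        # into mislabeled, confusing, and outlier types; this sketch does not.)
        flagged = votes / n_models > vote_threshold
        if not flagged.any():
            break                       # stopping condition: nothing to prune
        keep = keep[~flagged]           # remove flagged samples, then retrain
    return keep
```

The surviving indices returned by this sketch could then be used to train a final model on the pruned data, e.g. on a deliberately label-corrupted toy dataset.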

Keywords