Scientific Reports (May 2024)

Towards optimal model evaluation: enhancing active testing with actively improved estimators

  • JooChul Lee,
  • Likhitha Kolla,
  • Jinbo Chen

DOI
https://doi.org/10.1038/s41598-024-58633-3
Journal volume & issue
Vol. 14, no. 1
pp. 1–11

Abstract

With rapid advancements in machine learning and statistical models, ensuring the reliability of these models through accurate evaluation has become imperative. Traditional evaluation methods often rely on fully labeled test data, a requirement that is becoming increasingly impractical due to the growing size of datasets. In this work, we address this issue by extending existing work on active testing (AT) methods, which are designed to sequentially sample and label data for evaluating pre-trained models. We propose two novel estimators, the Actively Improved Levelled Unbiased Risk (AILUR) and the Actively Improved Inverse Probability Weighting (AIIPW) estimators, which are derived from nonparametric smoothing estimation. In addition, a model recalibration process is designed for the AIIPW estimator to optimize the sampling probability within the AT framework. We evaluate the proposed estimators on four real-world datasets and demonstrate that they consistently outperform existing AT methods. Our study also shows that the proposed methods are robust to changes in subsample sizes and effective at reducing labeling costs.
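To make the active-testing setting concrete, the sketch below illustrates a generic inverse-probability-weighting (IPW) risk estimate when only a subsample of the test pool is labeled. This is a hypothetical, simplified illustration of the AT framework the abstract builds on, not the paper's AILUR or AIIPW estimators; the simulated losses, surrogate scores, and sampling scheme are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of IPW risk estimation under active testing
# (NOT the paper's AILUR/AIIPW estimators).
rng = np.random.default_rng(0)

N = 10_000                                   # size of the unlabeled test pool
true_losses = rng.gamma(2.0, 0.5, size=N)    # per-point losses, revealed only once labeled

# Assumed acquisition: sample points with probability proportional to a cheap
# surrogate of the loss (here, the true loss plus nonnegative noise).
surrogate = true_losses + rng.normal(0.0, 0.2, size=N).clip(min=0)
q = surrogate / surrogate.sum()              # sampling probabilities over the pool

M = 500                                      # labeling budget (subsample size)
idx = rng.choice(N, size=M, replace=True, p=q)

# IPW (Horvitz-Thompson-style) estimate of the mean test risk: each labeled
# loss is reweighted by 1 / (N * q_i) to correct the sampling bias.
risk_ipw = np.mean(true_losses[idx] / (N * q[idx]))

naive = np.mean(true_losses[idx])            # biased: oversamples high-loss points
full = np.mean(true_losses)                  # ground truth over the full pool
print(f"IPW estimate: {risk_ipw:.3f}, naive: {naive:.3f}, full-pool risk: {full:.3f}")
```

Because the sampling probabilities concentrate on high-loss points, the naive subsample mean overestimates the risk, while the IPW correction recovers an unbiased estimate from far fewer labels; the estimators proposed in the paper refine this idea via nonparametric smoothing and recalibrated sampling probabilities.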