IEEE Access (Jan 2020)
Uncertainty-Based Rejection Wrappers for Black-Box Classifiers
Abstract
Machine Learning as a Service platforms are a sensible choice for practitioners who want to incorporate machine learning into their products while reducing development time and cost. However, to benefit from their advantages, a method is needed for assessing their performance when applied to a target application. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches their predictions with an uncertainty score. This uncertainty score enables the detection of inputs that receive highly confident but erroneous predictions, and it protects against out-of-distribution data points when the model is deployed in a production setting. We validate the proposal in several natural language processing and computer vision scenarios. Moreover, taking advantage of the computed uncertainty score, we show that rejecting uncertain predictions significantly increases the robustness and performance of the resulting classification system.
Keywords