PLoS ONE (2020)

The data representativeness criterion: Predicting the performance of supervised classification based on data set similarity.

  • Evelien Schat,
  • Rens van de Schoot,
  • Wouter M Kouw,
  • Duco Veen,
  • Adriënne M Mendrik

DOI
https://doi.org/10.1371/journal.pone.0237009
Journal volume & issue
Vol. 15, no. 8
p. e0237009

Abstract

In a broad range of fields it may be desirable to reuse a supervised classification algorithm and apply it to a new data set. However, such an algorithm only generalizes, and thus achieves similar classification performance, when the training data used to build it are similar to the new, unseen data to which it is applied. It is often unknown in advance how an algorithm will perform on new, unseen data, which is a crucial reason for not deploying an algorithm at all. Therefore, tools are needed to measure the similarity of data sets. In this paper, we propose the Data Representativeness Criterion (DRC) to determine how representative a training data set is of a new, unseen data set. We present a proof of principle to examine whether the DRC can quantify the similarity of data sets and whether it relates to the performance of a supervised classification algorithm. We compared a number of magnetic resonance imaging (MRI) data sets, ranging from subtle to severe differences in acquisition parameters. Results indicate that, based on the similarity of data sets, the DRC can give an indication of when the performance of a supervised classifier will decrease. The strictness of the DRC can be set by the user, depending on what one considers an acceptable level of underperformance.
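The abstract does not spell out how the DRC is computed, so the following is only a minimal sketch of the general idea it describes: quantifying how distinguishable a training set is from a new, unseen set by training a "domain classifier" to tell the two apart. The function name, the logistic-regression model, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def dataset_distinguishability(X_train, X_new, n_folds=5):
    """Illustrative (not the paper's DRC): estimate how separable two data sets are.

    Samples are labeled by their data set of origin and a classifier is
    trained to separate them. A cross-validated AUC near 0.5 means the
    sets are hard to tell apart (similar); an AUC near 1.0 means they
    differ systematically, e.g. due to different acquisition parameters.
    """
    X = np.vstack([X_train, X_new])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_new))])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=n_folds, scoring="roc_auc").mean()

# Synthetic example: the second set is either close to or shifted away
# from the training distribution (mimicking a change in acquisition).
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, size=(200, 10))
X_b_similar = rng.normal(0.1, 1.0, size=(200, 10))
X_b_shifted = rng.normal(1.5, 1.0, size=(200, 10))

print(dataset_distinguishability(X_a, X_b_similar))  # near 0.5: representative
print(dataset_distinguishability(X_a, X_b_shifted))  # near 1.0: not representative
```

Under this reading, a user-chosen threshold on such a similarity score would play the role the abstract assigns to the DRC's adjustable strictness: the closer the score is required to stay to 0.5, the less underperformance one is willing to accept.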