Applied Sciences (Aug 2024)

Class-Center-Based Self-Knowledge Distillation: A Simple Method to Reduce Intra-Class Variance

  • Ke Zhong,
  • Lei Zhang,
  • Lituan Wang,
  • Xin Shu,
  • Zizhou Wang

DOI
https://doi.org/10.3390/app14167022
Journal volume & issue
Vol. 14, no. 16
p. 7022

Abstract


Recent inter-sample self-distillation methods, which spread knowledge across samples, further improve the performance of deep models on multiple tasks. However, their existing implementations introduce additional sampling and computational overhead. In this work, we therefore propose a simple improved algorithm, center self-distillation, which achieves better results with almost no additional computational cost. Its design proceeds in two steps. First, we show through a simple visualization that inter-sample self-distillation yields a denser distribution of samples with identical labels in the feature space, and that the key to its effectiveness is reducing the intra-class variance of features through mutual learning between samples. This leads to the idea of providing a soft target for each class as the center from which all samples within that class learn. Second, we propose to learn class centers and, from them, compute class predictions for constructing these soft targets. In particular, to prevent over-fitting arising from eliminating intra-class variation, the soft target for each sample is customized by fusing the corresponding class prediction with that sample’s own prediction. This helps mitigate overconfident predictions and drives the network to produce more meaningful and consistent outputs. Experimental results on various image classification tasks show that this simple yet powerful approach not only reduces intra-class variance but also substantially improves the generalization ability of modern convolutional neural networks.
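To make the described training objective more concrete, below is a minimal PyTorch-style sketch of the class-center self-distillation idea summarized in the abstract, assuming a standard feature extractor followed by a linear classifier head. The EMA-based center update, the fusion weight `alpha`, the temperature `tau`, and all class and function names are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of class-center self-distillation (illustrative assumptions only).
import torch
import torch.nn.functional as F


class CenterSelfDistillation(torch.nn.Module):
    def __init__(self, num_classes: int, feat_dim: int,
                 alpha: float = 0.5, tau: float = 4.0, momentum: float = 0.9):
        super().__init__()
        # One tracked center per class in feature space (updated by EMA, not by gradients).
        self.register_buffer("centers", torch.zeros(num_classes, feat_dim))
        self.alpha = alpha        # fusion weight between class prediction and sample prediction
        self.tau = tau            # softmax temperature for the soft targets
        self.momentum = momentum  # EMA momentum for the center update

    @torch.no_grad()
    def update_centers(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Move each class center toward the mean feature of its samples in the current batch.
        for c in labels.unique():
            batch_mean = feats[labels == c].mean(dim=0)
            self.centers[c] = self.momentum * self.centers[c] + (1 - self.momentum) * batch_mean

    def forward(self, feats: torch.Tensor, logits: torch.Tensor,
                labels: torch.Tensor, classifier: torch.nn.Module) -> torch.Tensor:
        # Class predictions: push the class centers through the shared classifier head.
        center_logits = classifier(self.centers)                      # (num_classes, num_classes)
        class_pred = F.softmax(center_logits[labels] / self.tau, dim=1)
        sample_pred = F.softmax(logits.detach() / self.tau, dim=1)
        # Customized soft target: fuse the class prediction with the sample's own prediction,
        # so intra-class variation is reduced without being eliminated entirely.
        soft_target = self.alpha * class_pred + (1 - self.alpha) * sample_pred
        # Standard cross-entropy plus a KL distillation term toward the fused soft target.
        ce = F.cross_entropy(logits, labels)
        kd = F.kl_div(F.log_softmax(logits / self.tau, dim=1), soft_target,
                      reduction="batchmean") * (self.tau ** 2)
        self.update_centers(feats.detach(), labels)
        return ce + kd
```

In this sketch the class centers act as the per-class soft-target source, while the fusion with each sample's own prediction plays the regularizing role the abstract attributes to the customized targets.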

Keywords