IEEE Access (Jan 2020)

Knowledge Distillation in Acoustic Scene Classification

  • Jee-Weon Jung,
  • Hee-Soo Heo,
  • Hye-Jin Shim,
  • Ha-Jin Yu

DOI
https://doi.org/10.1109/ACCESS.2020.3021711
Journal volume & issue
Vol. 8
pp. 166870 – 166879

Abstract


Common acoustic properties shared by different classes degrade the performance of acoustic scene classification systems. This results in a phenomenon where a few confusing pairs of acoustic scenes account for a significant proportion of all misclassified audio segments. In this article, we propose adopting a knowledge distillation framework that trains deep neural networks using soft labels. Soft labels, extracted from another pre-trained deep neural network, reflect the similarity between classes that share similar acoustic properties. We also propose utilizing specialist models to provide additional soft labels. Each specialist model in this study refers to a deep neural network that concentrates on discriminating a single pair of acoustic scenes that are frequently misclassified. Self multi-head attention is explored for training specialist deep neural networks to further concentrate on the target pair of classes. The goal of this article is to train a single deep neural network that matches or exceeds the performance of an ensemble of multiple models by distilling the knowledge of those models. Diverse experiments conducted on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 task 1-a dataset demonstrate that the knowledge distillation framework is effective for acoustic scene classification. Specialist models successfully decrease the number of misclassified audio segments for their target classes. The final single model, trained with the proposed knowledge distillation from several models, including specialists trained using an attention mechanism, achieves a classification accuracy of 77.63%, higher than an ensemble of the baseline and multiple specialists.
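For readers unfamiliar with distillation using soft labels, the following is a minimal Python (PyTorch) sketch of the general idea described in the abstract: the student is trained against temperature-smoothed teacher posteriors in addition to the hard class labels. The function name, the temperature T, and the mixing weight alpha are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Illustrative sketch only; hyperparameters T and alpha are assumptions.
    # Soft labels: teacher posteriors smoothed by temperature T, so similar
    # acoustic scenes receive non-negligible probability mass.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term is scaled by T*T to keep its gradient magnitude comparable
    # to the hard-label cross-entropy term.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

In the paper's setting, additional soft labels would come from specialist networks focused on frequently confused scene pairs; combining those targets with the generalist teacher's targets is a design choice beyond this sketch.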

Keywords