IET Signal Processing (Jun 2023)

Nonspeech7k dataset: Classification and analysis of human non‐speech sound

  • Muhammad Mamunur Rashid,
  • Guiqing Li,
  • Chengrui Du

DOI
https://doi.org/10.1049/sil2.12233
Journal volume & issue
Vol. 17, no. 6

Abstract

Human non‐speech sounds occur naturally during everyday expression. Recognising a person's inability to produce certain expressions through non‐speech sounds may help identify disorders at an early stage in medical applications. A novel dataset named Nonspeech7k is introduced that contains a diverse set of human non‐speech sounds: breathing, coughing, crying, laughing, screaming, sneezing, and yawning. The authors then conduct a variety of classification experiments with end‐to‐end deep convolutional neural networks (CNNs) to demonstrate the performance of the dataset. First, a set of typical deep classifiers is used to verify the reliability and validity of Nonspeech7k. The CNN models involved include the 1D‐2D deep CNN EnvNet, the deep stacked CNNs M11 and M18, the residual CNN ResNet34, a modified M11 named M12, and the authors' baseline model. Among these, M12 achieves the highest accuracy, 79%. Second, to verify the heterogeneity of Nonspeech7k with respect to two typical datasets, FSD50K and VocalSound, the authors design a series of experiments analysing the classification performance of the deep neural network classifier M12 when trained on FSD50K, FSD50K + Nonspeech7k, VocalSound, and VocalSound + Nonspeech7k, respectively. Experimental results show that the classifier trained on an existing dataset mixed with Nonspeech7k achieves an accuracy improvement of up to 15.7% over the same classifier trained without Nonspeech7k. Nonspeech7k is 100% annotated, completely checked, and free of noise. It is available at https://doi.org/10.5281/zenodo.6967442.
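As a rough illustration of the benchmarking setup the abstract describes, the sketch below builds a minimal end‐to‐end 1D CNN over raw waveforms in PyTorch. It is not the authors' M12 (whose architecture is specified in the paper, not here); the layer sizes, the 16 kHz sample rate, and the seven‐class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical minimal raw-waveform CNN for 7-class non-speech sound
# classification. NOT the authors' M12; all hyperparameters are assumptions.
class TinyAudioCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4),  # wide front-end filter on raw audio
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 1, samples)
        h = self.features(x)
        h = h.mean(dim=-1)             # global average pooling over time
        return self.head(h)            # (batch, n_classes) logits

model = TinyAudioCNN()
logits = model(torch.randn(8, 1, 16000))  # e.g. 1 s of 16 kHz audio
print(logits.shape)                       # torch.Size([8, 7])
```

The dataset mixing used in the second experiment (e.g. FSD50K + Nonspeech7k as training data) can be expressed with `torch.utils.data.ConcatDataset` once both corpora are wrapped as `Dataset` objects sharing a common label space.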

Keywords