IEEE Access (Jan 2023)

A WAV2VEC2-Based Experimental Study on Self-Supervised Learning Methods to Improve Child Speech Recognition

  • Rishabh Jain,
  • Andrei Barcovschi,
  • Mariam Yahayah Yiwere,
  • Dan Bigioi,
  • Peter Corcoran,
  • Horia Cucu

DOI
https://doi.org/10.1109/ACCESS.2023.3275106
Journal volume & issue
Vol. 11
pp. 46938–46948

Abstract

Despite recent advancements in deep learning, child speech recognition remains a challenging task. Current Automatic Speech Recognition (ASR) models require substantial amounts of annotated training data, and annotated child speech is scarce. In this work, we explore using the wav2vec2 ASR model with different pretraining and finetuning configurations for self-supervised learning (SSL) to improve automatic child speech recognition. The pretrained wav2vec2 models were finetuned on varying amounts of child speech data, adult speech data, and combinations of both, to discover the optimum amount of data required to finetune the model for child ASR. Our best trained models achieve Word Error Rates (WER) of 7.42 on the MyST child speech dataset, 2.91 on the PFSTAR dataset, and 12.77 on the CMU KIDS dataset, using cleaned variants of each dataset. Our models outperform the unmodified wav2vec2 BASE 960 model on child speech with as little as 10 hours of child speech data used in finetuning. We analyze how different types of training data affect inference by combining custom datasets in pretraining, finetuning, and inference. These ‘cleaned’ datasets are made available so that other researchers can compare against our results.
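The results above are reported as Word Error Rate (WER), the standard ASR evaluation metric: the word-level edit distance (substitutions, insertions, deletions) between a hypothesis transcript and the reference, normalized by the reference length. As a minimal illustration of the metric (not the authors' evaluation code), a sketch in Python:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion against a 6-word reference -> WER of 1/6 (about 16.67%)
print(round(100 * wer("the cat sat on the mat", "the cat sat on mat"), 2))
```

A WER of 7.42 on MyST, for example, means roughly 7 word errors per 100 reference words.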

Keywords