IEEE Access (Jan 2024)

BanSpeech: A Multi-Domain Bangla Speech Recognition Benchmark Toward Robust Performance in Challenging Conditions

  • Ahnaf Mozib Samin,
  • M. Humayon Kobir,
  • Md. Mushtaq Shahriyar Rafee,
  • M. Firoz Ahmed,
  • Mehedi Hasan,
  • Partha Ghosh,
  • Shafkat Kibria,
  • M. Shahidur Rahman

DOI
https://doi.org/10.1109/ACCESS.2024.3371478
Journal volume & issue
Vol. 12
pp. 34527 – 34538

Abstract


Despite major improvements in automatic speech recognition (ASR) employing neural networks, ASR systems still lack robustness and generalizability under domain shift. This is mainly because principal corpus design criteria are often not identified and examined adequately while compiling ASR datasets. In this study, we investigate the robustness of fully supervised convolutional neural networks (CNNs) and of state-of-the-art transfer learning approaches, namely self-supervised wav2vec 2.0 and weakly supervised Whisper, for multi-domain ASR. We also demonstrate the significance of domain selection when building a corpus by assessing these models on a novel multi-domain Bangladeshi Bangla ASR evaluation benchmark—BanSpeech, which contains approximately 6.52 hours of human-annotated speech, totaling 8085 utterances, across 13 distinct domains. SUBAK.KO, a mostly read-speech corpus for the morphologically rich language Bangla, has been used to train the ASR systems. Experimental evaluation reveals that self-supervised cross-lingual pre-training with wav2vec 2.0 is the best strategy, compared with weak supervision and full supervision, for tackling the multi-domain ASR task. Moreover, the ASR models trained on SUBAK.KO have difficulty recognizing speech from domains consisting mostly of spontaneous speech. BanSpeech is publicly available to meet the need for a challenging evaluation benchmark for Bangla ASR.

Keywords