IEEE Access (Jan 2021)

Towards Constructing HMM Structure for Speech Recognition With Deep Neural Fenonic Baseform Growing

  • Lujun Li,
  • Tobias Watzel,
  • Ludwig Kürzinger,
  • Gerhard Rigoll

DOI
https://doi.org/10.1109/ACCESS.2021.3064197
Journal volume & issue
Vol. 9
pp. 39098–39110

Abstract


For decades, acoustic models in speech recognition systems have been built around Hidden Markov Models (HMMs), e.g., Gaussian Mixture Model-HMM and Deep Neural Network-HMM systems, and have achieved remarkable results. However, the prevailing HMM topology is the three-state left-to-right structure, although there is no guarantee that this structure is superior. Multiple studies have investigated optimization of the HMM structure, but none of them addresses the problem by leveraging deep learning algorithms. For the first time, this paper proposes a new training method based on Deep Neural Fenonic Baseform Growing to optimize the HMM structure; the method is concisely designed and computationally cheap. Moreover, this data-driven method customizes the HMM structure for each phone precisely, without external assumptions concerning the number of states or the transition patterns. Experimental results on both the TIMIT and TEDliumv2 corpora indicate that the proposed HMM structure substantially improves both the monophone system and the triphone system. Furthermore, its adoption further improves state-of-the-art speech recognition systems with remarkably fewer parameters.
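To make the contrast described in the abstract concrete, the sketch below is a minimal illustration, not the authors' implementation: it builds a conventional fixed three-state left-to-right HMM topology alongside a per-phone topology whose state count varies, as a data-driven method like the proposed baseform growing would determine. The phone labels, state counts, and function names are hypothetical.

```python
# Minimal sketch (hypothetical, not from the paper): fixed vs. per-phone
# left-to-right HMM topologies with variable numbers of states.
import numpy as np

def left_to_right_transitions(n_states: int, self_loop: float = 0.6) -> np.ndarray:
    """Build a left-to-right transition matrix with self-loops and
    forward transitions only; the last column models the exit arc."""
    trans = np.zeros((n_states, n_states + 1))  # extra column = exit
    for s in range(n_states):
        trans[s, s] = self_loop            # stay in the current state
        trans[s, s + 1] = 1.0 - self_loop  # advance to next state / exit
    return trans

# Conventional topology: every phone gets the same three states.
fixed_topology = {phone: left_to_right_transitions(3)
                  for phone in ("ah", "iy", "s", "t")}

# Customized topology: the state count per phone would be derived from the
# data (e.g. from grown baseform lengths); the counts below are made up.
hypothetical_state_counts = {"ah": 4, "iy": 5, "s": 2, "t": 3}
custom_topology = {phone: left_to_right_transitions(n)
                   for phone, n in hypothetical_state_counts.items()}

for phone, trans in custom_topology.items():
    print(phone, "->", trans.shape[0], "states")
```

The point of the sketch is only that nothing in an HMM-based recognizer forces a uniform three-state topology; each phone's structure can be set independently once a criterion for choosing it is available.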

Keywords