PLoS ONE (Jan 2016)

Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

  • Ruben Zazo,
  • Alicia Lozano-Diez,
  • Javier Gonzalez-Dominguez,
  • Doroteo T Toledano,
  • Joaquin Gonzalez-Rodriguez

DOI
https://doi.org/10.1371/journal.pone.0146917
Journal volume & issue
Vol. 11, no. 1
p. e0146917

Abstract


Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vectors and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those earlier results hard to reproduce. Further, we extend those experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.
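The abstract describes an end-to-end LSTM RNN that maps a short acoustic feature sequence directly to language posteriors, with an additional out-of-set (OOS) class. The following is a minimal sketch of that kind of classifier in PyTorch; the feature dimension, hidden size, number of layers, and the choice of 8 target languages plus one OOS class are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class LSTMLanguageID(nn.Module):
    """Minimal sketch of an end-to-end LSTM classifier for language ID.

    All layer sizes here are assumptions for illustration only.
    """

    def __init__(self, num_features=39, hidden_size=512, num_languages=9):
        super().__init__()
        # A single LSTM layer reads the acoustic feature sequence frame by frame.
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        # Linear output layer over 8 target languages + 1 out-of-set class.
        self.classifier = nn.Linear(hidden_size, num_languages)

    def forward(self, frames):
        # frames: (batch, time, num_features), e.g. MFCCs of a ~3 s utterance.
        outputs, _ = self.lstm(frames)
        # Use the last hidden state as a fixed-length utterance summary.
        return self.classifier(outputs[:, -1, :])


if __name__ == "__main__":
    model = LSTMLanguageID()
    # Roughly 3 s of audio at a 10 ms frame shift -> ~300 frames.
    utterance = torch.randn(1, 300, 39)
    logits = model(utterance)
    print(logits.shape)  # torch.Size([1, 9]): scores for 8 languages + OOS
```

Training such a model with a cross-entropy loss over the language labels (including the OOS class) would mirror the end-to-end setup the abstract refers to, although the paper's actual architecture and hyperparameters should be taken from the full text.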