IEEE Access (Jan 2021)

Smart Stacking of Deep Learning Models for Granular Joint Intent-Slot Extraction for Multi-Intent SLU

  • Niraj Kumar,
  • Bhiman Kumar Baghel

DOI
https://doi.org/10.1109/ACCESS.2021.3095416
Journal volume & issue
Vol. 9
pp. 97582–97590

Abstract


Multi-intent utterances have become increasingly important in spoken language understanding (SLU). Compared with single-intent systems, multi-intent systems and algorithms add complexity to SLU: they require an accurate system that can identify intents and slots at the fine-grain (i.e., word/token) level and can also handle the relation between intents and slots locally at the utterance level. Intents may belong to multiple domains and multiple different classes; similarly, slots may belong to multiple different classes, and slots of the same class may relate to multiple different intent classes. Unfortunately, very little work to date has addressed these issues at the fine-grain level. To solve this problem, we propose a smart stacking-ensemble strategy. The first stage of the system uses a combination of three different types of powerful multitasking NLP models, built on top of pre-trained BERT, XLNet, and ELMo. A stacking-ensemble layer then learns to predict the best possible results. We evaluated our model on four publicly available datasets. The results on these state-of-the-art public datasets show that our system outperforms existing multi-intent systems at both the token level and the sentence level.
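To make the stacking idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): each base tagger (e.g. the BERT-, XLNet-, and ELMo-based models) emits a per-token probability distribution over classes; the stacking layer concatenates these distributions into one meta-feature vector per token and maps it to a final label. The class counts, random inputs, and the fixed meta-layer weights below are illustrative assumptions only.

```python
import numpy as np

N_CLASSES = 3   # assumed label set for illustration, e.g. O / B-slot / I-slot
N_TOKENS = 4    # tokens in one example utterance
N_MODELS = 3    # three base taggers, as in the paper's first stage

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Simulated per-token class probabilities from the three base models:
# shape (n_models, n_tokens, n_classes). In practice these would come
# from the trained BERT/XLNet/ELMo taggers.
base_probs = softmax(rng.normal(size=(N_MODELS, N_TOKENS, N_CLASSES)))

# Stacking: concatenate the three distributions per token into
# meta-features of shape (n_tokens, n_models * n_classes).
meta_features = base_probs.transpose(1, 0, 2).reshape(N_TOKENS, -1)

# Hypothetical meta-layer: a linear map from meta-features to class
# scores, followed by argmax. Real weights would be learned on held-out
# base-model predictions; random weights stand in here.
W = rng.normal(size=(N_MODELS * N_CLASSES, N_CLASSES))
final_labels = softmax(meta_features @ W).argmax(axis=-1)

print(final_labels.shape)  # one predicted label per token
```

The key design point of stacking is that the meta-layer sees every base model's full distribution, so it can learn which model to trust per class rather than simply averaging votes.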

Keywords