Applied Sciences (Aug 2024)

Enhancing Task-Oriented Dialogue Modeling through Coreference-Enhanced Contrastive Pre-Training

  • Yi Huang,
  • Si Chen,
  • Yaqin Chen,
  • Junlan Feng,
  • Chao Deng

DOI
https://doi.org/10.3390/app14177614
Journal volume & issue
Vol. 14, no. 17
p. 7614

Abstract


Pre-trained language models (PLMs) are proficient at understanding context in plain text but often struggle with the nuanced linguistics of task-oriented dialogues. The information exchanges in dialogues and the dynamic role-shifting of speakers give rise to complex coreference and interlinking phenomena across multi-turn interactions. To address these challenges, we propose Coreference-Enhanced Contrastive Pre-training (CECPT), an innovative pre-training framework specifically designed to enhance dialogue modeling. CECPT utilizes unsupervised dialogue datasets to capture both semantic richness and structural coherence. Our experimental results demonstrate that the CECPT model significantly outperforms established baselines in three critical applications: intent recognition, dialogue act prediction, and dialogue state tracking. These findings suggest that CECPT is more adept at following the information flow within dialogues and at accurately linking mentions to their respective referents.
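The abstract does not spell out the pre-training objective, but a common way to realize contrastive pre-training over dialogues is an InfoNCE-style loss with in-batch negatives. The sketch below is illustrative only: the encoder, the loss formulation, and the idea of pairing each utterance with the coreference-linked context turn as its positive are all assumptions, not the paper's confirmed method.

```python
# Minimal sketch of a contrastive (InfoNCE) objective in the spirit of CECPT.
# Assumption: each utterance embedding is paired with an embedding of the
# dialogue context it corefers with (e.g., the turn introducing the entity
# it mentions); other in-batch pairs serve as negatives.
import torch
import torch.nn.functional as F


def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss over a batch of (anchor, positive) embedding pairs.

    anchor, positive: (batch, dim) tensors. For each anchor, the matching
    row of `positive` is the positive; all other rows act as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # (batch, batch) cosine-similarity logits; the diagonal holds positives.
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs,
    # e.g., "Book it for 7 pm." paired with "...a table at Luigi's".
    batch, dim = 8, 256
    utterance_emb = torch.randn(batch, dim)
    coref_context_emb = torch.randn(batch, dim)
    print(info_nce_loss(utterance_emb, coref_context_emb).item())
```

Under this formulation, pulling an utterance toward the turn that resolves its references would encourage the encoder to track entities across role shifts, which is consistent with the gains the abstract reports on dialogue state tracking.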

Keywords