EURASIP Journal on Audio, Speech, and Music Processing (Nov 2018)

Web-based environment for user generation of spoken dialog for virtual assistants

  • Ryota Nishimura,
  • Daisuke Yamamoto,
  • Takahiro Uchiya,
  • Ichi Takumi

DOI
https://doi.org/10.1186/s13636-018-0142-8
Journal volume & issue
Vol. 2018, no. 1
pp. 1 – 13

Abstract
In this paper, a web-based spoken dialog generation environment is developed that enables users to edit dialogs with a virtual assistant and to select the assistant's 3D motions and tone of voice. In the proposed system, "anyone" can "easily" post and edit dialog content for the dialog system. The dialog type supported by the system is limited to question-and-answer dialogs, in order to avoid conflicts caused by editing by multiple users. The spoken dialog sharing service and FST generator produce spoken dialog content for the MMDAgent spoken dialog system toolkit, which includes a speech recognizer, a dialog control unit, a speech synthesizer, and a virtual agent. Dialog content is created from question-and-answer dialogs posted by users together with FST templates. The proposed system was operated for more than a year in a student lounge at the Nagoya Institute of Technology, where users added more than 500 dialogs during the experiment. Images were registered with 65% of the postings, and the most frequently posted category was "animation, video games, manga." The system was also subjected to open examination by tourist information staff who had no prior experience with spoken dialog systems. Based on their impressions of tourist use of the dialog system, they shortened some of the system's responses and added pauses to the longer responses to make them easier to understand.
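The abstract describes generating FST dialog content by filling templates with user-posted question-and-answer pairs. The following is a minimal illustrative sketch of that idea, not the authors' actual generator or the exact MMDAgent FST syntax: the template string, state numbers, and function name are all assumptions for illustration.

```python
# Hypothetical simplified FST-style template: on recognizing the question,
# transition and speak the answer; after synthesis finishes, return to start.
TEMPLATE = (
    "{start} {mid} RECOG_EVENT_STOP|{question} SYNTH_START|{answer}\n"
    "{mid} {start} SYNTH_EVENT_STOP <eps>"
)

def make_fst_lines(question: str, answer: str, start: int = 0, mid: int = 1) -> str:
    """Fill the template with one user-posted Q&A pair, yielding
    FST-like transition lines (illustrative format only)."""
    return TEMPLATE.format(start=start, mid=mid, question=question, answer=answer)

print(make_fst_lines("Where is the library?", "It is on the second floor."))
```

In the described system, each user posting would contribute one such question-and-answer fragment, and the generator would assign non-conflicting state numbers when merging fragments into a single dialog script.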

Keywords