Informatics (Mar 2019)

Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice

  • Carlos S. C. Teixeira,
  • Joss Moorkens,
  • Daniel Turner,
  • Joris Vreeke,
  • Andy Way

DOI
https://doi.org/10.3390/informatics6010013
Journal volume & issue
Vol. 6, no. 1
p. 13

Abstract


Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, with speech recognition input latterly gaining some popularity. In order to test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation post-editing, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Reports on the touchscreen were mixed, and we consider that further research may be required to incorporate touch into a translation interface in a usable way.

Keywords