EURASIP Journal on Audio, Speech, and Music Processing (Jul 2024)

Adaptive multi-task learning for speech to text translation

  • Xin Feng,
  • Yue Zhao,
  • Wei Zong,
  • Xiaona Xu

DOI: https://doi.org/10.1186/s13636-024-00359-1
Journal volume & issue: Vol. 2024, no. 1, pp. 1–9

Abstract

End-to-end speech to text translation aims to directly translate speech in one language into text in another, a challenging cross-modal task, particularly in data-limited scenarios. Multi-task learning is an effective strategy for sharing knowledge between speech translation and machine translation: it allows models to leverage extensive machine translation data to learn the mapping between source and target languages, thereby improving speech translation performance. However, finding a set of weights that balances the various tasks in multi-task learning is difficult and computationally expensive. We propose an adaptive multi-task learning method that dynamically adjusts the task weights based on the proportional losses incurred during training, enabling adaptive balance in multi-task learning for speech to text translation. Moreover, inherent representation disparities across modalities prevent speech translation models from exploiting textual data effectively. To bridge this modality gap, we apply optimal transport at the input of the end-to-end model to find an alignment between speech and text sequences and to learn shared representations between them. Experimental results show that our method effectively improves performance on the Tibetan-Chinese, English-German, and English-French speech translation datasets.
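The abstract describes weighting tasks by their proportional losses but does not give the exact formula here. A minimal sketch of one such loss-proportional scheme, assuming each task's weight is derived from the ratio of its current loss to its initial loss (so a task whose loss is decreasing slowly receives a larger weight); all function names are illustrative, not the paper's API:

```python
# Hypothetical sketch of adaptive multi-task weighting.
# Assumption: weight each task by how little its loss has decayed relative
# to its initial value, normalized so the weights sum to the task count.
# The paper's actual scheme may differ in detail.

def adaptive_weights(current_losses, initial_losses):
    """Return one weight per task from loss-decay ratios."""
    ratios = [c / i for c, i in zip(current_losses, initial_losses)]
    total = sum(ratios)
    n = len(ratios)
    return [n * r / total for r in ratios]

def combined_loss(losses, weights):
    """Weighted sum of per-task losses used for the training step."""
    return sum(w * l for w, l in zip(weights, losses))

# Example: speech translation loss fell from 2.0 to 1.5 (slow progress),
# machine translation loss fell from 1.8 to 0.6 (fast progress), so the
# speech translation task is up-weighted on the next step.
w = adaptive_weights([1.5, 0.6], [2.0, 1.8])
```

In this toy example the lagging speech translation task gets the larger weight, which is the balancing behavior the abstract attributes to the method.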

Keywords