Исследования языка и современное гуманитарное знание (Nov 2021)
Speech Translation vs. Interpreting
Abstract
Artificial intelligence (AI), deep learning technologies and big data have impacted the interpreting market, and AI-based technologies can be used in automated speech translation. The first experiments to create an automatic interpreter took place in the late 1980s and early 1990s. Today, several AI-based devices on the market attempt to fully automate the interpreting process, in both the consecutive and the simultaneous mode, in a limited number of specific communication situations. This article first reviews the history and mechanism of automated interpreting and provides a comparison of human and automated interpreting. It also presents the main features and use cases of automated speech translation (AST). By showing that the two activities are intrinsically different, it argues that they need to be distinguished more clearly by defining the speech-to-speech (S2S) language transfer accomplished by computers as automated speech translation and reserving the term ‘interpreting’ for the human activity. Automated speech translation has an undeniable role and place in today’s world, steeped in technology and AI. However, it must be underlined that AST is completely different from the complex interpreting service human interpreters provide, and the circumstances and contexts in which its use is advisable are intrinsically different from those of human interpreting. Therefore, the real question is how AST and human interpreting can complement each other: in which situations and contexts is AST desirable and applicable, and when is there a need for human interpreting?