Ampersand (Dec 2023)

Cognitive processing of the extra visual layer of live captioning in simultaneous interpreting. Triangulation of eye-tracked process and performance data

  • Lu Yuan
  • Binhua Wang

Journal volume & issue
Vol. 11, p. 100131

Abstract

While real-time automatic captioning has become available on various online meeting platforms, it poses additional cognitive challenges for interpreters because it adds an extra layer of information to process during interpreting. Against this background, this empirical study investigates the cognitive processing of live captioning in simultaneous interpreting on Zoom Meetings. Thirteen interpreting trainees in a postgraduate professional training programme were recruited for an eye-tracking experiment on simultaneous interpreting under two conditions: with live captioning on and with live captioning off. Their eye movement data and interpreting performance data were collected during the experiment. Three questions were explored: 1) How do interpreters process the additional layer of visual information from live captioning? 2) Which types of information segments tax more cognitive resources in interpreting with live captioning? 3) Is there a significant difference in interpreting accuracy between interpreting with live captioning and interpreting without it? The results showed the following: 1) Although participants constantly shifted their attention between the live transcript area and the non-live transcript area, they tended to consciously keep their visual attention on the live captioning area when numbers and proper names appeared. 2) With live captioning on, processing information segments containing a higher density of numbers and proper names required more cognitive effort than processing segments without them. 3) There was a significant improvement in the accuracy of numbers and proper names in interpreting with live captioning.

Keywords