TIPA. Travaux interdisciplinaires sur la parole et le langage (Jun 2020)

Le rôle des gestes dans les explications lexicales par visioconférence [The role of gestures in lexical explanations via videoconferencing]

  • Benjamin Holt

DOI
https://doi.org/10.4000/tipa.3458
Journal volume & issue
Vol. 36

Abstract


Coverbal gestures are tightly linked to spoken language and play a vital role in communication. Speakers tend to produce more gestures when their interlocutor is visible, such as during face-to-face communication, or when they know that they can be seen, such as in a videoconferencing environment. Gestures are especially useful to teachers, who tend to be more aware of the gestures that they produce and who use them to fulfill various pedagogical functions. These so-called teaching gestures are particularly useful for lexical explanations, where a gesture can be associated with a word. Gestures have been shown to enhance word memorization, especially if the learner reproduces the gesture.

Lexical explanations occur frequently during foreign language teaching, both in physical classrooms and in online videoconferencing environments. Empirical evidence has shown that negotiation sequences, which are conducive to language acquisition, are primarily triggered by lexical items. It is therefore interesting to analyze the role that gestures play during these videoconferenced lexical explanation sequences.

Explicit lexical explanation sequences are important for beginner and intermediate learners because their vocabularies are not yet rich enough for them to learn new words incidentally by reading or listening. During a lexical explanation sequence, teachers use various techniques to focus on different aspects of the lexical item. The classic model of a lexical explanation sequence comprises three steps: the opening phase, during which the unknown lexical item is identified and highlighted; the core, during which the lexical item is explained; and the closing phase, during which the learner's comprehension is ratified.

The objective of a lexical explanation sequence is to help the learner come to know the lexical item in question. Lexical knowledge can be broken down in many different ways, but for our purposes and in the most general terms, knowing a lexical item involves knowing its form, its meanings, and its usage constraints. Form includes the written form (spelling) and the spoken form (pronunciation). Meaning involves what the word signifies and refers to, as well as the other words associated with it. Usage covers grammatical functions, collocations, and constraints such as frequency and register. Some or all of these three main facets of lexical knowledge are dealt with during lexical explanation sequences.

According to the multimodal perspective that we have adopted, interlocutors draw on a variety of semiotic resources during lexical explanation sequences, such as oral and written language, prosody, gestures, facial expressions, and posture. No single resource is always more important than another; choices are made depending on the interactional context and on the semiotic environment. The videoconferencing platform allows a wide range of modes to be used, including gestures, as long as the interlocutors make their gestures visible in front of the webcam. Gestures produced in a pedagogical environment are known to fulfill cognitive, emotional, organizational, informational, and evaluative roles, among others. This study aims to describe the roles that gestures play during lexical explanation sequences in videoconferencing sessions.

Our data come from a semester-long videoconferencing project that took place during the fall of 2013 between 12 master's students in France who were training to become French teachers and 18 undergraduate business students in Ireland who were learning French for business purposes in preparation for an internship in France the following year. The students had an intermediate level of French, and the weekly interactions lasted between 25 and 40 minutes. On the basis of consent form responses, 7 French master's students and 12 Irish undergraduates were retained for the study.

After aligning the various audio and video recordings with video editing software, all spoken and written language was transcribed using multimodal annotation software. From 15.5 hours of video, 295 lexical explanation sequences were identified and selected for study. About half of the lexical explanation sequences were triggered by comprehension problems; the other half were initiated by production problems. The teacher trainees initiated the lexical explanations slightly more frequently than the students did. Each lexical explanation sequence was divided into the three phases listed above (opening, core, and closing). Alongside the written and spoken language, multimodal phenomena such as posture, facial expressions, eyebrow movements, hand gestures, and document sharing were annotated.

Although a large number of gestures were produced outside the webcam's field of view, we were able to analyze the role that visible gestures played in the lexical explanation sequences. We postulate that the gestures produced in the limited space in front of the webcam were made with communicative and pedagogical intent. The teacher trainees appear to regard gestures as a valuable explanation strategy, because they used them often during lexical explanation sequences. Our examples show that the teacher trainees used gestures to explain all of the main facets of lexical knowledge: form, meaning, and usage. In addition, they used gestures to involve the learners in the explicative process, which facilitates the acquisition of lexical knowledge.

Our examples demonstrate the utility of gestures for explaining different facets of lexical items in this particular multimodal environment, but in the absence of a posttest we cannot make any claims about acquisition or draw conclusions about which gestures or types of explanation work better than others. Nor can we ascertain whether the learners paid attention to the gestures that the teachers produced in front of the webcam. Future research should address these questions.