Frontiers in Human Neuroscience (Jun 2018)

Multimodal Communication in Aphasia: Perception and Production of Co-speech Gestures During Face-to-Face Conversation

  • Basil C. Preisig,
  • Noëmi Eggenberger,
  • Dario Cazzoli,
  • Thomas Nyffeler,
  • Klemens Gutbrod,
  • Jean-Marie Annoni,
  • Jurka R. Meichtry,
  • Tobias Nef,
  • René M. Müri

DOI
https://doi.org/10.3389/fnhum.2018.00200
Journal volume & issue
Vol. 12

Abstract


The role of nonverbal communication in patients with post-stroke language impairment (aphasia) is not yet fully understood. This study investigated how patients with aphasia perceive and produce co-speech gestures during face-to-face interaction, and whether distinct brain lesions predict the frequency of spontaneous co-speech gesturing. For this purpose, we recorded samples of conversations in patients with aphasia and healthy participants. Gesture perception was assessed by means of a head-mounted eye-tracking system, and the produced co-speech gestures were coded according to a linguistic classification system. The main results are that meaning-laden gestures (e.g., iconic gestures representing object shapes) are more likely to attract visual attention than meaningless hand movements, and that patients with aphasia are more likely than healthy participants to fixate co-speech gestures overall. This implies that patients with aphasia may benefit from the multimodal information provided by co-speech gestures. On the level of co-speech gesture production, we found that patients with damage to the anterior part of the arcuate fasciculus showed a higher frequency of meaning-laden gestures. This area lies in close vicinity to the premotor cortex and is considered important for speech production. This may suggest that the use of meaning-laden gestures depends on the integrity of patients’ speech production abilities.

Keywords