PLoS ONE (Nov 2012)

A supramodal neural network for speech and gesture semantics: an fMRI study.

  • Benjamin Straube,
  • Antonia Green,
  • Susanne Weis,
  • Tilo Kircher

DOI
https://doi.org/10.1371/journal.pone.0051207
Journal volume & issue
Vol. 7, no. 11
p. e51207

Abstract


In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information carried in the auditory and the visual modality depends on the same or on different brain networks is largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips of an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (-) meaningful variants. In the experimental conditions, familiar speech or isolated iconic gestures were presented; in the visual control condition the volunteers watched meaningless gestures (G-), while in the acoustic control condition a foreign language was presented (S-). The conjunction of the visual and acoustic semantic processing contrasts revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view; our results rather indicate that these regions constitute a supramodal semantic processing network.
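
To make the conjunction logic concrete: a voxel counts as "supramodal" only if it is reliably activated by both the speech contrast (S+ > S-) and the gesture contrast (G+ > G-). The following Python sketch illustrates a minimum-statistic conjunction over two simulated voxel-wise t-maps; it is an illustration of the general technique (Nichols et al.'s conjunction null), not the authors' actual SPM analysis pipeline, and all names, values, and thresholds are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

n_voxels = 10_000
# Hypothetical voxel-wise t-maps for the two semantic contrasts.
t_speech = rng.normal(0.0, 1.0, n_voxels)   # S+ > S- (acoustic semantics)
t_gesture = rng.normal(0.0, 1.0, n_voxels)  # G+ > G- (visual semantics)

# Plant a simulated "supramodal" cluster that responds in both modalities.
shared = slice(0, 200)
t_speech[shared] += 4.0
t_gesture[shared] += 4.0

# Minimum-statistic conjunction: a voxel survives only if it exceeds the
# threshold in EVERY contrast, i.e. we threshold the voxel-wise minimum t.
t_threshold = 3.1  # illustrative; real analyses correct for multiple comparisons
t_min = np.minimum(t_speech, t_gesture)
supramodal = t_min > t_threshold

print(f"voxels in conjunction: {supramodal.sum()} "
      f"(of {shared.stop} planted shared voxels)")

Because the minimum of the two t-values must clear the threshold, noise voxels that happen to be large in only one contrast are excluded, which is exactly why a conjunction isolates regions engaged by both the acoustic and the visual semantic conditions.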