PLoS ONE (Jan 2020)

Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions.

  • Cristóbal Pagán Cánovas,
  • Javier Valenzuela,
  • Daniel Alcaraz Carrión,
  • Inés Olza,
  • Michael Ramscar

DOI
https://doi.org/10.1371/journal.pone.0233892
Journal volume & issue
Vol. 15, no. 6
p. e0233892

Abstract

The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship between gesture and speech, reflected in the high degree of co-occurrence for our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.
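
As a rough illustration of the quantities the abstract refers to, the sketch below computes a per-expression gesture co-occurrence rate and a simple surprisal-based informativity proxy from toy annotation records. The data, field names, and the choice of surprisal as the informativity measure are illustrative assumptions for this sketch, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): given per-utterance annotations of a
# temporal expression and whether a co-speech gesture accompanied it, compute the
# gesture co-occurrence rate per expression and a surprisal-based informativity proxy.
import math
from collections import Counter

# Toy records: (temporal expression, gesture observed with the utterance)
records = [
    ("from beginning to end", True),
    ("from beginning to end", True),
    ("all day", False),
    ("all day", True),
    ("a long time ago", True),
]

totals = Counter(expr for expr, _ in records)            # utterances per expression
with_gesture = Counter(expr for expr, g in records if g)  # of those, how many with gesture
grand_total = sum(totals.values())

for expr in sorted(totals):
    co_rate = with_gesture[expr] / totals[expr]           # speech-gesture co-occurrence rate
    rel_freq = totals[expr] / grand_total                 # relative frequency in this sample
    surprisal = -math.log2(rel_freq)                      # informativity proxy in bits
    print(f"{expr!r}: co-occurrence {co_rate:.2f}, surprisal {surprisal:.2f} bits")
```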