Crossmodal benefits to vocal emotion perception in cochlear implant users
Celina Isabelle von Eiff, Sascha Frühholz, Daniela Korth, Orlando Guntinas-Lichius, and Stefan Robert Schweinberger
Affiliations
Celina Isabelle von Eiff
Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany; Corresponding author
Sascha Frühholz
Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland; Department of Psychology, University of Oslo, 0373 Oslo, Norway
Daniela Korth
Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
Orlando Guntinas-Lichius
Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
Stefan Robert Schweinberger
Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
Summary
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), a focus that disregards the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated the effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups using adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and their VER correlated with quality of life. Importantly, CI users showed larger VER benefits from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits reflect deafness-related compensation rather than merely degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. These findings advocate the use of AV stimuli during CI rehabilitation and suggest that caricaturing holds promise for both perceptual training and sound processor technology.