PLoS ONE (Sep 2019)

Task-uninformative visual stimuli improve auditory spatial discrimination in humans but not the ideal observer.

  • Madeline S Cappelloni,
  • Sabyasachi Shivkumar,
  • Ralf M Haefner,
  • Ross K Maddox

DOI
https://doi.org/10.1371/journal.pone.0215417
Journal volume & issue
Vol. 14, no. 9
p. e0215417

Abstract

In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet the brain translates sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented either in the center of the screen or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli, even though the shapes provided no information about which side the auditory target was on. We also show that a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.
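For readers unfamiliar with the model class the abstract invokes, a Bayesian causal-inference ideal observer can be sketched in the standard two-hypothesis form (one common audiovisual source vs. two independent sources). This is a generic illustration of the technique, not the paper's fitted model; all parameter values (`sig_a`, `sig_v`, the spatial prior `sig_p`, and the common-cause prior `p_common`) are assumptions chosen for demonstration.

```python
import numpy as np

def causal_inference(x_a, x_v, sig_a=2.0, sig_v=1.0, sig_p=10.0, p_common=0.5):
    """Ideal-observer estimate of auditory location via model averaging.

    x_a, x_v: noisy auditory and visual position measurements (deg).
    Returns (auditory location estimate, posterior prob. of a common cause).
    Prior over source locations is Gaussian, mean 0, s.d. sig_p.
    """
    va, vv, vp = sig_a**2, sig_v**2, sig_p**2
    # Likelihood that both measurements came from one common source
    # (shared location integrated out analytically)
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom1) / (2 * np.pi * np.sqrt(denom1))
    # Likelihood that the measurements came from two independent sources
    like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
              / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    # Posterior probability of the common-cause hypothesis
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Optimal auditory estimate under each causal structure
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # cues combined
    s_alone = (x_a / va) / (1 / va + 1 / vp)                      # audition only
    # Average the two estimates, weighted by the causal posterior
    return post_c1 * s_fused + (1 - post_c1) * s_alone, post_c1

# Nearby cues: common cause is likely, so vision pulls the auditory estimate
est_near, p_near = causal_inference(x_a=3.0, x_v=2.0)
# Distant cues: independent causes are likely, so vision is largely ignored
est_far, p_far = causal_inference(x_a=3.0, x_v=-8.0)
```

The key property, and the reason the model cannot exploit task-uninformative visual stimuli in the way human listeners apparently do, is that vision only shifts the auditory estimate when the cues are close enough that a common cause is probable; it never adds information about which side the auditory target is on.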