Segregation of complex acoustic scenes based on temporal coherence
Sundeep Teki, Maria Chait, Sukhbinder Kumar, Shihab Shamma, Timothy D Griffiths
Affiliations
Sundeep Teki
Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
Maria Chait
UCL Ear Institute, University College London, London, United Kingdom
Sukhbinder Kumar
Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
Shihab Shamma
The Institute for Systems Research, University of Maryland, College Park, United States; Département d’études cognitives, École Normale Supérieure, Paris, France
Timothy D Griffiths
Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
Abstract
In contrast to the complex acoustic environments we encounter every day, most studies of auditory segregation have used relatively simple signals. Here, we synthesized a new stimulus to examine the detection of coherent patterns (‘figures’) from overlapping ‘background’ signals. In a series of experiments, we demonstrate that human listeners are remarkably sensitive to the emergence of such figures and can tolerate a variety of spectral and temporal perturbations. This robust behavior is consistent with the existence of automatic auditory segregation mechanisms that are highly sensitive to correlations across frequency and time. The observed behavior cannot be explained purely on the basis of adaptation-based models used to explain the segregation of deterministic narrowband signals. We show that the present results are consistent with the predictions of a model of auditory perceptual organization based on temporal coherence. Our data thus support a role for temporal coherence as an organizational principle underlying auditory segregation.
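To make the stimulus design and the coherence principle concrete, the following Python sketch is offered as an illustration only; it is not the authors’ implementation, and all parameter values (chord duration, frequency pool, component counts) and function names (chord, sfg) are assumptions chosen for clarity rather than values taken from the study. It builds a figure-ground stimulus in which the ‘figure’ is defined solely by a fixed set of frequency components repeating across successive random chords, then quantifies temporal coherence as the correlation between channel time courses: figure channels become mutually coherent while background channels do not.

import numpy as np

FS = 44100                # sample rate in Hz (assumed)
CHORD_DUR = 0.05          # 50 ms chords (illustrative value)
POOL = np.geomspace(180.0, 7200.0, 128)  # log-spaced frequency pool (assumed)

def chord(freqs, dur=CHORD_DUR, fs=FS):
    """Sum of equal-amplitude pure tones with 5 ms raised-cosine on/off ramps."""
    t = np.arange(int(dur * fs)) / fs
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    ramp = np.hanning(2 * int(0.005 * fs))
    n = len(ramp) // 2
    env = np.ones_like(t)
    env[:n] = ramp[:n]
    env[-n:] = ramp[n:]
    return sig * env / len(freqs)

def sfg(n_chords=40, bg=10, fig=4, onset=20, seed=0):
    """Sequence of random chords; from chord `onset` on, `fig` frequency
    components repeat verbatim in every chord, creating a figure that is
    distinguished from the background only by its temporal coherence."""
    rng = np.random.default_rng(seed)
    fig_idx = rng.choice(len(POOL), size=fig, replace=False)
    act = np.zeros((len(POOL), n_chords))  # channel-by-chord activations
    audio = []
    for i in range(n_chords):
        idx = rng.choice(len(POOL), size=bg, replace=False)  # fresh background
        if i >= onset:
            idx = np.union1d(idx, fig_idx)  # add the repeating figure components
        act[idx, i] = 1.0
        audio.append(chord(POOL[idx]))
    return np.concatenate(audio), act, fig_idx

audio, act, fig_idx = sfg()
# Temporal coherence as pairwise correlation of channel time courses:
# figure channels correlate strongly with one another (values near 1),
# whereas randomly drawn background channels do not.
C = np.corrcoef(act[fig_idx])
print(C.round(2))

Note that nothing in the spectrum of any single chord marks the figure; it emerges only from correlations across frequency channels over time, which is why detecting it requires integrating information across both dimensions, as the abstract argues.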