Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception
Enea Ceolini,
Jens Hjortkjær,
Daniel D.E. Wong,
James O’Sullivan,
Vinay S. Raghavan,
Jose Herrero,
Ashesh D. Mehta,
Shih-Chii Liu,
Nima Mesgarani
Affiliations
Enea Ceolini
Corresponding author; University of Zürich and ETH Zürich, Institute of Neuroinformatics, Switzerland
Jens Hjortkjær
Department of Health Technology, Danmarks Tekniske Universitet (DTU), Kongens Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark
Daniel D.E. Wong
Laboratoire des Systèmes Perceptifs, CNRS, UMR 8248, Paris, France; Département d’Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France
James O’Sullivan
Department of Electrical Engineering, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
Vinay S. Raghavan
Department of Electrical Engineering, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
Jose Herrero
Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
Ashesh D. Mehta
Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
Shih-Chii Liu
University of Zürich and ETH Zürich, Institute of Neuroinformatics, Switzerland
Nima Mesgarani
Corresponding author; Department of Electrical Engineering, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS), in which the information about the attended speech, as decoded from the subject’s brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.
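To make the BISS idea described above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a speech-separation front-end whose mask estimation is conditioned on an attention signal decoded from the listener's brain, here represented as the attended-speech envelope. The module names, dimensions, and the concatenation-based fusion scheme are illustrative assumptions for exposition only, not the architecture used in the paper.

```python
# Illustrative sketch: a separation front-end conditioned on a brain-decoded
# attended-speech envelope. All architectural details are assumptions.
import torch
import torch.nn as nn


class BrainInformedSeparator(nn.Module):
    def __init__(self, n_filters=128, kernel_size=16, stride=8, env_dim=1):
        super().__init__()
        # Learned analysis/synthesis filterbanks over the raw mixture waveform.
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride)
        # Project the brain-decoded attention envelope to the feature dimension.
        self.env_proj = nn.Conv1d(env_dim, n_filters, kernel_size=1)
        # Mask estimator conditioned on mixture features + envelope features.
        self.mask_net = nn.Sequential(
            nn.Conv1d(2 * n_filters, n_filters, kernel_size=1),
            nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, mixture, attended_envelope):
        # mixture: (batch, 1, samples); attended_envelope: (batch, 1, frames)
        feats = self.encoder(mixture)
        env = self.env_proj(attended_envelope)
        # Resample envelope features to the frame rate of the mixture features.
        env = nn.functional.interpolate(
            env, size=feats.shape[-1], mode="linear", align_corners=False
        )
        # Estimate a mask for the attended talker and resynthesize the waveform.
        mask = self.mask_net(torch.cat([feats, env], dim=1))
        return self.decoder(feats * mask)


if __name__ == "__main__":
    model = BrainInformedSeparator()
    mix = torch.randn(2, 1, 16000)   # 1 s of two-talker mixture at 16 kHz
    env = torch.rand(2, 1, 100)      # decoded envelope, e.g. sampled at 100 Hz
    est = model(mix, env)            # estimate of the attended talker
    print(est.shape)                 # (2, 1, 16000)
```

The key point of the sketch is that the decoded neural signal enters the separation network directly as a conditioning input, rather than being used only afterwards to select among independently separated sources.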