Revealing the spatiotemporal brain dynamics of covert speech compared with overt speech: A simultaneous EEG-fMRI study
Wei Zhang,
Muyun Jiang,
Kok Ann Colin Teo,
Raghavan Bhuvanakantham,
LaiGuan Fong,
Wei Khang Jeremy Sim,
Zhiwei Guo,
Chuan Huat Vince Foo,
Rong Hui Jonathan Chua,
Parasuraman Padmanabhan,
Victoria Leong,
Jia Lu,
Balázs Gulyás,
Cuntai Guan
Affiliations
Wei Zhang
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
Muyun Jiang
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Kok Ann Colin Teo
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Division of Neurosurgery, National University Health System, Singapore
Raghavan Bhuvanakantham
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
LaiGuan Fong
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Chuan Huat Vince Foo
DSO National Laboratories, Singapore
Rong Hui Jonathan Chua
DSO National Laboratories, Singapore
Parasuraman Padmanabhan
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
Victoria Leong
Division of Psychology, Nanyang Technological University, Singapore; Department of Pediatrics, University of Cambridge, United Kingdom
Jia Lu
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; DSO National Laboratories, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Balázs Gulyás
Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Corresponding author at: Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore.
Cuntai Guan
School of Computer Science and Engineering, Nanyang Technological University, Singapore; Corresponding author at: School of Computer Science and Engineering, Nanyang Technological University, Singapore.
Abstract
Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders, and reconstructing CS content with brain-computer interfaces (BCIs) is an emerging technique. However, it remains controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI in which 32 participants generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we estimated the spatiotemporal dynamics of neural activity with high precision. During CS, EEG source activity was localized to three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS engaged more brain regions with stronger activations, CS was characterized by earlier event-locked activation in the left putamen (peaking at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, yet it showed weaker FC towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed a significant indirect association between this earlier left putamen activation and CS, mediated by the reduced FC towards speech-related regions. These findings reveal the specific spatiotemporal dynamics of CS, offering mechanistic insights that are potentially relevant to the treatment of self-regulation deficits and speech disorders, and to the development of BCI speech applications.
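For readers unfamiliar with fMRI-informed EEG source imaging, the sketch below illustrates one common way such spatial constraints can be applied: restricting the EEG forward model to fMRI-defined regions before computing a distributed inverse solution. It is a minimal, hypothetical example written against MNE-Python, not the authors' pipeline; the file names, labels, and parameter values are assumptions for illustration only.

```python
# Hypothetical sketch: fMRI-constrained EEG source localization with MNE-Python.
# Not the authors' pipeline; it only shows one standard way to use fMRI activation
# maps as a spatial prior, by restricting the EEG forward model to fMRI-defined
# regions before computing a minimum-norm (dSPM) estimate.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Assumed inputs (placeholder file names):
#   - 'cs_epochs-epo.fif': EEG epochs for the covert-speech condition
#   - 'fsaverage-fwd.fif': a precomputed EEG forward solution
#   - FreeSurfer labels exported from the fMRI analysis (e.g., left precentral, SMA)
epochs = mne.read_epochs("cs_epochs-epo.fif")
fwd = mne.read_forward_solution("fsaverage-fwd.fif")
noise_cov = mne.compute_covariance(epochs, tmax=0.0)  # baseline noise covariance

# fMRI spatial constraint: keep only sources inside fMRI-active labels
labels = [
    mne.read_label("lh.precentral_fmri.label"),
    mne.read_label("lh.sma_fmri.label"),
]
fwd_fmri = mne.forward.restrict_forward_to_label(fwd, labels)

# Distributed inverse solution (dSPM) on the fMRI-constrained source space
inv = make_inverse_operator(epochs.info, fwd_fmri, noise_cov, loose=0.2, depth=0.8)
evoked = epochs.average()
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")

# Time course of the constrained sources, e.g., to read off activation peak latencies
peak_vertex, peak_time = stc.get_peak()
print(f"Peak source activation at {peak_time * 1e3:.0f} ms")
```

Restricting the forward model is only one way to impose an fMRI prior; softer approaches weight the source covariance by the fMRI activation map rather than excluding sources outright, trading bias against the risk of missing activity outside the fMRI-defined regions.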