Scientific Reports (Sep 2021)

Distinct neural sources underlying visual word form processing as revealed by steady state visual evoked potentials (SSVEP)

  • Fang Wang,
  • Blair Kaneshiro,
  • C. Benjamin Strauber,
  • Lindsey Hasak,
  • Quynh Trang H. Nguyen,
  • Alexandra Yakovleva,
  • Vladimir Y. Vildavski,
  • Anthony M. Norcia,
  • Bruce D. McCandliss

DOI
https://doi.org/10.1038/s41598-021-95627-x
Journal volume & issue
Vol. 11, no. 1
pp. 1–15

Abstract

EEG has been central to investigations of the time course of various neural functions underpinning visual word recognition. Recently, the steady-state visual evoked potential (SSVEP) paradigm has been increasingly adopted for word recognition studies due to its high signal-to-noise ratio. Such studies, however, have typically been framed around a single source in the left ventral occipitotemporal cortex (vOT). Here, we combine SSVEP recorded from 16 adult native English speakers with a data-driven spatial filtering approach, Reliable Components Analysis (RCA), to elucidate distinct functional sources with overlapping yet separable time courses and topographies that emerge when contrasting words with pseudofont visual controls. The first component topography was maximal over left vOT regions with a shorter latency (approximately 180 ms). A second component was maximal over more dorsal parietal regions with a longer latency (approximately 260 ms). Both components consistently emerged across a range of parameter manipulations, including changes in the spatial overlap between successive stimuli and changes in both base and deviation frequency. We then contrasted word-in-nonword and word-in-pseudoword conditions to test the hierarchical processing mechanisms underlying visual word recognition. Results suggest that these hierarchical contrasts fail to evoke a unitary component that might be reasonably associated with lexical access.
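
For readers unfamiliar with RCA, the sketch below illustrates its core idea: spatial filters are obtained from a generalized eigenvalue problem that maximizes across-trial (trial-to-trial) covariance relative to within-trial covariance, so that the leading components capture the most reliably repeated portion of the evoked response. This is a minimal, illustrative Python implementation under assumed data shapes; it is not the authors' code, and the function name `reliable_components`, the data layout, and the small regularization constant are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def reliable_components(trials, n_components=3):
    """Illustrative RCA-style spatial filtering (not the published implementation).

    trials: array of shape (n_trials, n_samples, n_channels), e.g. epoched EEG.
    Returns spatial filters of shape (n_channels, n_components) that maximize
    across-trial covariance relative to within-trial covariance.
    """
    n_trials, n_samples, n_channels = trials.shape
    centered = trials - trials.mean(axis=1, keepdims=True)

    # Pooled within-trial covariance
    Rw = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        Rw += centered[i].T @ centered[i]
    Rw /= n_trials

    # Across-trial covariance, averaged over all ordered trial pairs (i != j);
    # symmetric because each unordered pair contributes both orders
    Rb = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                Rb += centered[i].T @ centered[j]
    Rb /= n_trials * (n_trials - 1)

    # Generalized eigenvalue problem: maximize across / within reliability.
    # A small ridge (hypothetical value) keeps Rw positive definite.
    evals, evecs = eigh(Rb, Rw + 1e-9 * np.eye(n_channels))
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_components]]
```

Applying such filters along the channel dimension of each trial yields component time courses; in an SSVEP frequency-tagging design like the one summarized above, the frequency-domain response of those components at the deviant frequency and its harmonics can then be compared across stimulus conditions.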