Frontiers in Neuroscience (Jul 2018)

Modeling Semantic Encoding in a Common Neural Representational Space

  • Cara E. Van Uden,
  • Samuel A. Nastase,
  • Andrew C. Connolly,
  • Ma Feilong,
  • Isabella Hansen,
  • M. Ida Gobbini,
  • James V. Haxby

DOI: https://doi.org/10.3389/fnins.2018.00437
Journal volume & issue: Vol. 12

Abstract


Encoding models for mapping voxelwise semantic tuning are typically estimated separately for each individual, limiting their generalizability. In the current report, we develop a method for estimating semantic encoding models that generalize across individuals. Functional MRI was used to measure brain responses while participants freely viewed a naturalistic audiovisual movie. Word embeddings capturing agent-, action-, object-, and scene-related semantic content were assigned to each imaging volume based on an annotation of the film. We constructed both conventional within-subject semantic encoding models and between-subject models where the model was trained on a subset of participants and validated on a left-out participant. Between-subject models were trained using cortical surface-based anatomical normalization or surface-based whole-cortex hyperalignment. We used hyperalignment to project group data into an individual’s unique anatomical space via a common representational space, thus leveraging a larger volume of data for out-of-sample prediction while preserving the individual’s fine-grained functional–anatomical idiosyncrasies. Our findings demonstrate that anatomical normalization degrades the spatial specificity of between-subject encoding models relative to within-subject models. Hyperalignment, on the other hand, recovers the spatial specificity of semantic tuning lost during anatomical normalization, and yields model performance exceeding that of within-subject models.
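To make the modeling approach concrete, below is a minimal sketch of a voxelwise semantic encoding model: a ridge regression mapping per-volume word-embedding features to fMRI responses, evaluated by the correlation between predicted and observed voxel time series. The array shapes, the ridge penalty, the train/test split, and the use of synthetic data are illustrative assumptions, not the authors' exact pipeline (which additionally involves surface-based hyperalignment to a common representational space).

```python
# Minimal sketch of a voxelwise semantic encoding model (illustrative assumptions,
# not the authors' exact pipeline).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train_trs, n_test_trs = 1200, 300   # imaging volumes (TRs); assumed counts
n_features = 300                      # word-embedding dimensions per volume; assumed
n_voxels = 500                        # cortical voxels/vertices; assumed

# Semantic feature matrix: one embedding vector per imaging volume, derived
# from an annotation of the movie (synthetic stand-in here).
X_train = rng.standard_normal((n_train_trs, n_features))
X_test = rng.standard_normal((n_test_trs, n_features))

# fMRI responses: voxel time series in a common space (synthetic stand-in here).
Y_train = rng.standard_normal((n_train_trs, n_voxels))
Y_test = rng.standard_normal((n_test_trs, n_voxels))

# Fit ridge regression weights mapping features to all voxels at once
# (scikit-learn's Ridge handles multi-output targets directly).
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

def columnwise_corr(a, b):
    """Correlation between corresponding columns (voxels) of two matrices."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Model performance per voxel: correlation of predicted vs. observed responses.
scores = columnwise_corr(Y_pred, Y_test)
print(f"median voxelwise prediction r = {np.median(scores):.3f}")
```

In a between-subject variant, the training responses would come from other participants' data projected into the test participant's space (anatomically or via hyperalignment) before fitting the same kind of model.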
