Nature Communications (Jun 2024)

Shared functional specialization in transformer-based language models and the human brain

  • Sreejan Kumar,
  • Theodore R. Sumers,
  • Takateru Yamakoshi,
  • Ariel Goldstein,
  • Uri Hasson,
  • Kenneth A. Norman,
  • Thomas L. Griffiths,
  • Robert D. Hawkins,
  • Samuel A. Nastase

DOI
https://doi.org/10.1038/s41467-024-49173-5
Journal volume & issue
Vol. 15, no. 1
pp. 1 – 19

Abstract

When processing language, the brain is thought to deploy specialized computations to construct meaning from complex linguistic structures. Recently, artificial neural networks based on the Transformer architecture have revolutionized the field of natural language processing. Transformers integrate contextual information across words via structured circuit computations. Prior work has focused on the internal representations (“embeddings”) generated by these circuits. In this paper, we instead analyze the circuit computations directly: we deconstruct these computations into the functionally-specialized “transformations” that integrate contextual information across words. Using functional MRI data acquired while participants listened to naturalistic stories, we first verify that the transformations account for considerable variance in brain activity across the cortical language network. We then demonstrate that the emergent computations performed by individual, functionally-specialized “attention heads” differentially predict brain activity in specific cortical regions. These heads fall along gradients corresponding to different layers and context lengths in a low-dimensional cortical space.
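
For readers who want a concrete picture of the pipeline the abstract describes, the following is a minimal sketch, not the authors' code. It assumes a GPT-2 model loaded through the Hugging Face transformers library, interprets each head's “transformation” as the attention-weighted value vectors it produces before the output projection (one common reading of per-head circuit computations), and uses random placeholder arrays in place of the fMRI voxel time series. It shows how such per-head features could be extracted and fed into a ridge-regression encoding model.

```python
# Illustrative sketch only (not the authors' pipeline). Assumes GPT-2 via the
# Hugging Face "transformers" library and scikit-learn; the fMRI data below is
# a random placeholder standing in for real voxel time series.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "After the storm passed, the hikers finally reached the summit."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

    # Per-head "transformations": attention weights applied to the value
    # vectors, kept separate for each head (i.e., before heads are
    # concatenated and mixed by the output projection).
    n_heads = model.config.n_head
    head_dim = model.config.n_embd // n_heads
    head_features = []
    for layer_idx, block in enumerate(model.h):
        hidden = block.ln_1(outputs.hidden_states[layer_idx])     # block input, normed
        q, k, v = block.attn.c_attn(hidden).split(model.config.n_embd, dim=2)
        v = v.reshape(1, -1, n_heads, head_dim).transpose(1, 2)   # [1, heads, seq, dim]
        attn = outputs.attentions[layer_idx]                      # [1, heads, seq, seq]
        per_head = attn @ v                                       # [1, heads, seq, dim]
        head_features.append(per_head.squeeze(0).numpy())

# Feature vector for the final token: every head's transformation, concatenated.
x_last_token = np.concatenate([h[:, -1, :].ravel() for h in head_features])

# Toy encoding model: ridge regression from head features to voxel responses.
# A real analysis computes features per word across hours of story listening,
# accounts for the hemodynamic response, and evaluates with cross-validation.
rng = np.random.default_rng(0)
n_words, n_voxels = 500, 100
X = rng.standard_normal((n_words, x_last_token.size))   # placeholder features
Y = rng.standard_normal((n_words, n_voxels))            # placeholder fMRI data
encoding_model = Ridge(alpha=10.0).fit(X, Y)
print(encoding_model.coef_.shape)                       # (n_voxels, n_features)
```

Fitting separate encoding models per head (rather than pooling all heads into one feature vector, as in this toy example) is what would allow individual heads' predictions to be compared across cortical regions, in the spirit of the head-wise analyses the abstract describes.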