Transactions of the Association for Computational Linguistics (Jan 2023)

Transparency Helps Reveal When Language Models Learn Meaning

Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith

DOI: https://doi.org/10.1162/tacl_a_00565
Volume 11, pp. 617–634

Abstract

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are changed to be context-dependent with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon—referential opacity—add to the growing body of evidence that current language models do not represent natural language semantics well. We show this failure relates to the context-dependent nature of natural language form-meaning mappings.
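To make the abstract's central distinction concrete, the following is a minimal sketch, not the paper's actual synthetic languages or experimental setup: a toy interpreter for a "strongly transparent" language, where an expression denotes the same value in every context, contrasted with a variant where the same expression's denotation depends on a hypothetical context variable. All names, the grammar, and the context flag are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's setup): a transparent toy language
# versus a context-dependent variant. Grammar and names are hypothetical.

def denote_transparent(expr):
    """Strong transparency: an expression's denotation is fixed,
    independent of any surrounding context."""
    atoms = {"two": 2, "three": 3}
    if isinstance(expr, str):
        return atoms[expr]
    op, left, right = expr  # e.g. ("plus", "two", "three")
    return denote_transparent(left) + denote_transparent(right)


def denote_context_dependent(expr, context):
    """Context dependence: the same surface form can denote different
    values depending on a context variable, breaking transparency."""
    shift = 1 if context == "shifted" else 0  # hypothetical context effect
    atoms = {"two": 2, "three": 3}
    if isinstance(expr, str):
        return atoms[expr] + shift
    op, left, right = expr
    return (denote_context_dependent(left, context)
            + denote_context_dependent(right, context))


if __name__ == "__main__":
    e = ("plus", "two", "three")
    print(denote_transparent(e))                   # always 5
    print(denote_context_dependent(e, "neutral"))  # 5
    print(denote_context_dependent(e, "shifted"))  # 7: same form, different meaning
```

Under this toy picture, the paper's finding can be read as: language models trained on text from the first kind of language learn to track semantic relations between expressions, while the second kind, where form alone no longer determines meaning, degrades that ability.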