Transactions of the Association for Computational Linguistics (Jan 2021)

Sparse, Dense, and Attentional Representations for Text Retrieval

Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins

DOI: https://doi.org/10.1162/tacl_a_00369
Volume 9, pp. 329–345

Abstract

Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
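For concreteness, below is a minimal sketch of the scoring scheme the abstract describes: each document is scored by the inner product of its dense vector with the query vector, and a sparse-dense hybrid combines this with a term-based score. The `encode` stub, the example texts, and the mixing weight `lam` are illustrative assumptions, not the authors' implementation; in the paper the encoder is a learned neural network (e.g., BERT), and the sparse scores would come from a term-based model such as BM25.

```python
# Sketch of dual-encoder scoring and a simple sparse-dense hybrid.
# The encoder and sparse scores here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def encode(texts, dim=128):
    """Stand-in encoder: maps each text to a fixed-size dense vector.
    In practice this would be a learned neural encoder."""
    return rng.normal(size=(len(texts), dim))

queries = ["what is dense retrieval"]
documents = [
    "dual encoders embed text into fixed-length vectors",
    "sparse models score documents with bag-of-words overlap",
]

q = encode(queries)      # shape (1, dim)
D = encode(documents)    # shape (num_docs, dim)

# Dense score: one inner product per document.
dense_scores = q @ D.T   # shape (1, num_docs)

# Sparse scores (e.g., BM25) would come from a term-based index;
# random placeholders stand in for them here.
sparse_scores = rng.normal(size=dense_scores.shape)

# Hybrid: weighted combination of sparse and dense scores.
# The linear mixing weight is an assumption for illustration.
lam = 0.5
hybrid_scores = lam * dense_scores + (1 - lam) * sparse_scores

# Rank documents by descending hybrid score.
ranking = np.argsort(-hybrid_scores[0])
print("ranked doc indices:", ranking)
```

Because the dense score is a single inner product per document, retrieval can be served efficiently with maximum-inner-product search over precomputed document vectors, which is what makes dual encoders cheaper at query time than full attentional cross-encoders.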