Machine Learning and Knowledge Extraction (Jun 2022)

Benefits from Variational Regularization in Language Models

  • Cornelia Ferner,
  • Stefan Wegenkittl

DOI
https://doi.org/10.3390/make4020025
Journal volume & issue
Vol. 4, no. 2
pp. 542–555

Abstract


Representations from common pre-trained language models have been shown to suffer from the degeneration problem, i.e., they occupy a narrow cone in latent space. This problem can be addressed by enforcing isotropy in latent space. In analogy with variational autoencoders, we suggest applying a token-level variational loss to a Transformer architecture and optimizing the standard deviation of the prior distribution in the loss function as a model parameter to increase isotropy. The resulting latent space is complete and interpretable: any given point is a valid embedding and can be decoded back into text. This allows for text manipulations such as paraphrase generation directly in latent space. Surprisingly, features extracted at the sentence level also show competitive results on benchmark classification tasks.
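
To make the idea of a token-level variational loss with a learnable prior standard deviation concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation; all names, dimensions, and hyperparameters are illustrative. Each token's hidden state is mapped to a diagonal Gaussian posterior, a latent vector is sampled via the reparameterization trick, and the KL term is computed against an isotropic prior N(0, σ²I) whose standard deviation σ is optimized as a model parameter.

```python
import torch
import torch.nn as nn


class TokenLevelVariationalHead(nn.Module):
    """Hypothetical token-level variational bottleneck on Transformer states.

    Illustrative sketch only: maps each token embedding to a diagonal Gaussian
    posterior and penalizes the KL divergence to an isotropic prior
    N(0, sigma_p^2 I), where sigma_p is a learned parameter.
    """

    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Log of the prior standard deviation, optimized jointly with the model.
        self.prior_log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim)
        mu = self.to_mu(hidden_states)
        logvar = self.to_logvar(hidden_states)

        # Reparameterization trick: sample one latent vector per token.
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)

        # KL( N(mu, sigma_q^2) || N(0, sigma_p^2) ), summed over latent
        # dimensions and averaged over tokens and batch.
        prior_var = torch.exp(2.0 * self.prior_log_sigma)
        kl = 0.5 * ((logvar.exp() + mu.pow(2)) / prior_var
                    - 1.0
                    + 2.0 * self.prior_log_sigma
                    - logvar)
        kl = kl.sum(dim=-1).mean()
        return z, kl
```

In a full model, the sampled latents `z` would feed the decoder, and the training objective would combine the usual language-modeling (reconstruction) loss with the weighted KL term, e.g. `loss = lm_loss + beta * kl` for some weight `beta`; because `prior_log_sigma` receives gradients through the KL term, the prior's spread adapts during training rather than being fixed in advance.
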

Keywords