Computational Linguistics (Sep 2018)

Feature-Based Decipherment for Machine Translation

  • Iftekhar Naim,
  • Parker Riley,
  • Daniel Gildea

DOI: https://doi.org/10.1162/coli_a_00326
Journal volume & issue: Vol. 44, No. 3, pp. 525–546

Abstract


Orthographic similarities across languages provide a strong signal for unsupervised probabilistic transduction (decipherment) between closely related language pairs. Existing decipherment models, however, are not well suited to exploiting these orthographic similarities. We propose a log-linear model with latent variables that incorporates orthographic similarity features. Maximum likelihood training is computationally expensive for the proposed log-linear model. To address this challenge, we perform approximate inference via Markov chain Monte Carlo sampling and contrastive divergence. Our results show that the proposed log-linear model with contrastive divergence outperforms existing generative decipherment models by exploiting orthographic features. The model both scales to large vocabularies and preserves accuracy in low- and no-resource settings.
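To illustrate the kind of model the abstract describes, the sketch below scores candidate translations with a log-linear model over orthographic features. It is a minimal, hypothetical example, not the paper's implementation: the feature set (normalized edit distance and a length-difference penalty), the hand-set weights, and the toy vocabulary are all assumptions made for illustration; the paper learns weights via contrastive divergence rather than fixing them.

```python
import math

def edit_distance(a, b):
    # Standard Levenshtein distance via single-row dynamic programming.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def features(src, tgt):
    # Hypothetical orthographic feature vector:
    # (1) similarity = 1 - normalized edit distance, (2) length-difference penalty.
    norm_ed = edit_distance(src, tgt) / max(len(src), len(tgt))
    return [1.0 - norm_ed, -abs(len(src) - len(tgt))]

def translation_probs(src, candidates, weights):
    # Log-linear model: p(tgt | src) ∝ exp(w · φ(src, tgt)),
    # normalized over the candidate target vocabulary.
    scores = {t: math.exp(sum(w * f for w, f in zip(weights, features(src, t))))
              for t in candidates}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}

# Toy example: German "nacht" against a tiny English vocabulary.
probs = translation_probs("nacht", ["night", "light", "apple"], weights=[5.0, 0.5])
best = max(probs, key=probs.get)  # orthographic similarity favors "night"
```

In a full decipherment setting the target words are latent, the normalization runs over the whole target vocabulary (which is what makes exact maximum-likelihood training expensive), and MCMC sampling with contrastive divergence approximates the gradient instead of enumerating candidates as done here.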