IEEE Access (Jan 2019)

Learnable Markov Chain Monte Carlo Sampling Methods for Lattice Gaussian Distribution

  • Zheng Wang,
  • Shanxiang Lyu,
  • Ling Liu

DOI: https://doi.org/10.1109/ACCESS.2019.2925530
Journal volume & issue: Vol. 7, pp. 87494–87503

Abstract

As a key ingredient of machine learning and artificial intelligence, sampling algorithms for the lattice Gaussian distribution have emerged as an important problem in the coding and decoding of wireless communications. In this paper, building on conventional Gibbs sampling, the learnable delayed Metropolis-within-Gibbs (LDMWG) sampling algorithm is proposed to improve convergence performance by exploiting the acceptance mechanism of the Metropolis-Hastings (MH) algorithm from the family of Markov chain Monte Carlo (MCMC) methods. A candidate rejected by the acceptance mechanism is reused as learnable experience for generating a new candidate within the same Markov move. In this way, the overall probability of the Markov chain remaining in the same state is greatly reduced, which yields improved convergence performance in the sense of Peskun ordering. Moreover, to reduce the complexity cost during Markov mixing, a symmetric sampling structure that greatly simplifies the sampling operation is further introduced, giving the symmetric learnable delayed Metropolis-within-Gibbs (SLDMWG) sampling algorithm. Finally, simulation results for multiple-input multiple-output (MIMO) detection are presented to confirm the convergence gain and the complexity reduction brought by the proposed sampling schemes.
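The core idea described above, reusing a rejected candidate to build a second-stage proposal within the same Markov move, is an instance of delayed-rejection Metropolis-Hastings. The sketch below is not the paper's LDMWG algorithm (which operates coordinate-wise on a discrete lattice Gaussian); it is a minimal one-dimensional continuous illustration of the delayed-rejection acceptance mechanism, with the target density, proposal scales, and function names chosen purely for exposition:

```python
import math
import random

def normpdf(x, mu, s):
    """Density of a normal with mean mu and std s, used for the proposal ratio."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def target(x):
    """Unnormalized 1-D standard Gaussian target (stand-in for one Gibbs coordinate)."""
    return math.exp(-0.5 * x * x)

def dr_mh_step(x, sigma=1.0, rng=random):
    """One delayed-rejection Metropolis step (illustrative sketch).

    Stage 1 is a plain symmetric random-walk MH proposal. If it is rejected,
    stage 2 centers a narrower proposal on the rejected candidate y1, so the
    rejection is 'learned from' instead of leaving the chain in place.
    """
    # Stage 1: symmetric random-walk proposal around the current state.
    y1 = x + rng.gauss(0.0, sigma)
    a1 = min(1.0, target(y1) / target(x))
    if rng.random() < a1:
        return y1

    # Stage 2: second proposal informed by the rejected candidate y1.
    y2 = y1 + rng.gauss(0.0, sigma / 2.0)
    # Stage-1 acceptance probability along the reversed path y2 -> y1.
    a1_rev = min(1.0, target(y1) / target(y2))
    # Second-stage acceptance ratio (Tierney-Mira form), including both
    # stage-1 and stage-2 proposal densities for the forward/reverse paths.
    num = (target(y2) * normpdf(y1, y2, sigma)
           * normpdf(x, y1, sigma / 2.0) * (1.0 - a1_rev))
    den = (target(x) * normpdf(y1, x, sigma)
           * normpdf(y2, y1, sigma / 2.0) * (1.0 - a1))
    if den > 0.0 and rng.random() < min(1.0, num / den):
        return y2
    return x  # both candidates rejected: the chain stays put
```

Running a chain of such steps should reproduce the target's moments; the second stage reduces the probability of remaining in the same state, which is exactly the Peskun-ordering improvement the abstract refers to.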
