IEEE Access (Jan 2023)

ezLDA: Efficient and Scalable LDA on GPUs

  • Shilong Wang,
  • Hang Liu,
  • Anil Gaihre,
  • Hengyong Yu

DOI
https://doi.org/10.1109/ACCESS.2023.3315239
Journal volume & issue
Vol. 11
pp. 100165 – 100179

Abstract

Latent Dirichlet Allocation (LDA) is a statistical approach to topic modeling with a wide range of applications. Attracted by the exceptional computing and memory throughput of GPUs, this work introduces ezLDA, which achieves efficient and scalable LDA training on GPUs through three contributions. First, ezLDA introduces a three-branch sampling method that exploits the convergence heterogeneity of different tokens to reduce redundant sampling work. Second, to enable sparsity-aware formats for both D and W on GPUs with fast sampling and updating, we introduce a hybrid format for W along with a corresponding token partitioning to T and inverted index designs. Third, we design a hierarchical workload balancing solution that addresses the extremely skewed workload imbalance on GPUs and scales ezLDA across multiple GPUs. Taken together, ezLDA achieves superior performance over state-of-the-art approaches with lower memory consumption.
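
The abstract uses D, W, and T without expanding them; in standard LDA notation these are the document-topic counts, the word-topic counts, and the per-topic totals that collapsed Gibbs sampling reads and updates for every token. The sketch below is a minimal CPU reference of that sampling loop, included only to make these symbols concrete; it is not the paper's GPU implementation, and the function and variable names (gibbs_lda, docs, iters) are illustrative assumptions.

import numpy as np

def gibbs_lda(docs, V, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Reference collapsed Gibbs sampler for LDA.
    docs: list of documents, each a list of word ids; V: vocabulary size; K: topics."""
    rng = np.random.default_rng(seed)
    D = np.zeros((len(docs), K), dtype=np.int64)   # document-topic counts
    W = np.zeros((V, K), dtype=np.int64)           # word-topic counts
    T = np.zeros(K, dtype=np.int64)                # per-topic token totals
    z = [rng.integers(K, size=len(doc)) for doc in docs]  # random topic init

    # Accumulate counts for the initial assignment.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            D[d, k] += 1; W[w, k] += 1; T[k] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token, resample its topic, and add it back.
                D[d, k] -= 1; W[w, k] -= 1; T[k] -= 1
                p = (D[d] + alpha) * (W[w] + beta) / (T + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                D[d, k] += 1; W[w, k] += 1; T[k] += 1
    return D, W

Every token triggers reads and writes to D and W in this loop, which is why sparse storage for both matrices and balanced distribution of tokens across GPU threads, the concerns the abstract highlights, dominate GPU LDA performance.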

Keywords