IEEE Access (Jan 2022)

GDLL: A Scalable and Share Nothing Architecture Based Distributed Graph Neural Networks Framework

  • Duong Thi Thu Van,
  • Muhammad Numan Khan,
  • Tariq Habib Afridi,
  • Irfan Ullah,
  • Aftab Alam,
  • Young-Koo Lee

DOI
https://doi.org/10.1109/ACCESS.2022.3148126
Journal volume & issue
Vol. 10
pp. 21684–21700

Abstract


Deep learning has recently been shown to be effective at uncovering hidden patterns in non-Euclidean spaces, where data is represented as graphs with complex object relationships and interdependencies. Because of the implicit data dependence in big graphs with millions of nodes and billions of edges, it is hard for industrial communities to exploit these methods to address real-world challenges at scale. The skewness of big graphs, the distributed file system's performance penalty on small k-hop neighborhood subgraphs, and the varying sizes of subgraphs make training Graph Neural Networks (GNNs) even more challenging in a distributed environment that uses parameter servers. To address these issues, we propose a scalable, layered, fault-tolerant, in-memory distributed computing-based graph neural network framework called the Graph Distributed Learning Library (GDLL). The base layer utilizes an optimized distributed file system and a scalable graph data store to reduce the performance penalty. The second layer provides distributed graph processing using in-memory graph programming models while optimizing and hiding the underlying complexity of information-complete subgraph computation. In the third layer, GNN modules are deployed on top of the first two layers for efficient distributed training using parameter servers. Finally, we evaluate GDLL against state-of-the-art solutions and show that it significantly outperforms them in efficiency while maintaining similar GNN convergence.
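GDLL's own subgraph computation is not detailed in the abstract; as a minimal sketch of the k-hop neighborhood idea it refers to, the snippet below extracts, via breadth-first search over an adjacency list, the subgraph a GNN mini-batch would gather for one seed node (the function name `k_hop_subgraph` and the toy graph are illustrative assumptions, not GDLL's API):

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    """Collect the k-hop neighborhood of `seed` by BFS over an
    adjacency-list graph. GNN mini-batch training gathers such
    "information complete" subgraphs so that every message-passing
    layer has all the neighbor features it needs locally."""
    depth = {seed: 0}          # node -> hop distance from the seed
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if depth[node] == k:   # do not expand beyond k hops
            continue
        for nbr in adj.get(node, []):
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    nodes = set(depth)
    # Keep only edges whose endpoints both fall inside the neighborhood.
    edges = [(u, v) for u in nodes for v in adj.get(u, []) if v in nodes]
    return nodes, edges

# Toy chain graph 0-1-2-3: the 2-hop neighborhood of node 0 is {0, 1, 2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
nodes, edges = k_hop_subgraph(adj, 0, 2)
```

The skew the abstract mentions shows up here directly: a high-degree seed yields a far larger subgraph than a low-degree one, which is why distributing these variable-size fetches efficiently is non-trivial.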

Keywords