AI Open (Jan 2022)

On the distribution alignment of propagation in graph neural networks

  • Qinkai Zheng,
  • Xiao Xia,
  • Kun Zhang,
  • Evgeny Kharlamov,
  • Yuxiao Dong

Journal volume & issue
Vol. 3
pp. 218 – 228

Abstract


Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies have focused on designing different strategies to propagate information over the graph structure. After systematic investigation, we observe that the propagation step in GNNs matters, but the resulting performance improvement is insensitive to the location where propagation is applied. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of distribution alignment, i.e., propagation over graphs actually aligns the underlying distributions of the training and test sets. These findings are instrumental in understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs. Source code: https://github.com/THUDM/DistAlign-GNNs.
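The distribution-alignment effect described above can be illustrated with a minimal NumPy sketch (not the authors' code; the toy graph, feature shift, and propagation count are illustrative assumptions). It applies symmetrically normalized propagation, X ← D^(-1/2)(A + I)D^(-1/2) X, as used in decoupled GNNs, and tracks how the gap between the train and test feature means shrinks:

```python
import numpy as np

# Hedged sketch: feature propagation on a toy random graph, showing that
# repeated propagation pulls the train/test feature distributions together.
rng = np.random.default_rng(0)

n = 200
# Toy symmetric adjacency with self-loops (Erdos-Renyi, p = 0.05).
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)

# Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
A_hat = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]

# Toy features whose "train" half starts from a shifted distribution.
X = rng.normal(size=(n, 8))
X[: n // 2] += 1.0  # mean shift on the training nodes

def mean_gap(X):
    """Average absolute gap between train and test feature means."""
    return np.abs(X[: n // 2].mean(0) - X[n // 2 :].mean(0)).mean()

gap = [mean_gap(X)]
for _ in range(4):  # propagate 4 times: X <- A_hat @ X
    X = A_hat @ X
    gap.append(mean_gap(X))

# The train/test gap shrinks with each propagation step.
print([round(g, 3) for g in gap])
```

Because the toy graph connects train and test nodes at random, each propagation step averages features across the split, so the initial mean shift of the training nodes is smoothed away; this mirrors the paper's observation that propagation aligns the two distributions regardless of where in the model it is applied.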
