CAAI Transactions on Intelligence Technology (Apr 2024)

Attention‐based network embedding with higher‐order weights and node attributes

  • Xian Mo,
  • Binyuan Wan,
  • Rui Tang,
  • Junkai Ding,
  • Guangdi Liu

DOI
https://doi.org/10.1049/cit2.12215
Journal volume & issue
Vol. 9, no. 2
pp. 440 – 451

Abstract

Network embedding aspires to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information. However, most existing embedding approaches ignore either temporal information or network attributes. A self-attention-based architecture using higher-order weights and node attributes for both static and temporal attributed network embedding is presented in this article. A random walk sampling algorithm based on higher-order weights and node attributes is presented to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of networks. For temporal attributed networks, the algorithm incorporates previous snapshots of networks, containing first-order to k-order weights and node attribute similarities, into one weighted graph. In addition, the algorithm utilises a damping factor to ensure that more recent snapshots are allocated greater weight. Attribute features are then incorporated into topological features. Next, the authors adopt the most advanced architecture, Self-Attention Networks, to learn node representations. Experimental results on node classification of static attributed networks and link prediction of temporal attributed networks reveal that the proposed approach is competitive against diverse state-of-the-art baseline approaches.
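The weighted-graph construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of matrix powers for i-order weights, cosine similarity for node attributes, and the parameters `damping` and `attr_weight` are all assumptions made for the example.

```python
import numpy as np

def combined_weight_graph(snapshots, attrs, k=2, damping=0.5, attr_weight=1.0):
    """Merge first- to k-order weights from each network snapshot into one
    weighted graph, damped so more recent snapshots receive greater weight,
    then add node attribute similarities.

    snapshots : list of (n, n) adjacency matrices, oldest first
    attrs     : (n, d) node attribute matrix
    Parameter names and defaults are illustrative, not the paper's.
    """
    n = snapshots[0].shape[0]
    W = np.zeros((n, n))
    T = len(snapshots)
    for t, A in enumerate(snapshots):
        # damping**(T-1-t) is 1 for the most recent snapshot and
        # shrinks geometrically for older ones
        factor = damping ** (T - 1 - t)
        P = np.eye(n)
        for _ in range(k):
            P = P @ A          # P becomes the i-th matrix power of A
            W += factor * P    # accumulate i-order weights, i = 1..k
    # cosine similarity between node attribute vectors (self-similarity zeroed)
    norms = np.linalg.norm(attrs, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    X = attrs / norms
    S = X @ X.T
    np.fill_diagonal(S, 0.0)
    W += attr_weight * S
    return W
```

For a static attributed network, the same sketch applies with a single snapshot (`T = 1`), where the damping factor has no effect. The resulting `W` would then serve as the transition-weight graph for the random walk sampling step.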

Keywords