CAAI Transactions on Intelligence Technology (Dec 2023)

CoLM2S: Contrastive self‐supervised learning on attributed multiplex graph network with multi‐scale information

  • Beibei Han,
  • Yingmei Wei,
  • Qingyong Wang,
  • Shanshan Wan

DOI
https://doi.org/10.1049/cit2.12168
Journal volume & issue
Vol. 8, no. 4
pp. 1464 – 1479

Abstract

Contrastive self-supervised representation learning on attributed graph networks with Graph Neural Networks has attracted considerable research interest recently. However, two challenges remain. First, most real-world systems involve multiple relations, where entities are linked by different types of relations and each relation forms a view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can serve as self-supervised signals, but it has not been fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM2S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, a contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS), which uses a graph convolutional network as the encoder to capture intra-relation information with multi-scale structure-level and feature-level self-supervised signals, is introduced first. The structure-level information includes the edge structure and sub-graph structure, and the feature-level information comprises the outputs of different graph convolutional layers. Second, following the consensus assumption among inter-relations, the CoLM2S framework is proposed to jointly learn the various graph relations in an attributed multiplex graph network and achieve a global consensus node embedding. The proposed method can fully distil the multi-scale graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our method, which outperforms existing competitive baselines.
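To make the two components concrete, the sketch below gives a minimal, hedged reading of the framework in PyTorch with PyTorch Geometric: one GCN encoder per relation, a feature-level intra-relation term that contrasts the outputs of different GCN layers, and an inter-relation term that pulls the per-relation embeddings towards a consensus. The names `MultiplexEncoder`, `nt_xent`, and `colm2s_style_loss`, the choice of an InfoNCE-style objective, and the temperature `tau` are illustrative assumptions rather than the authors' implementation; the structure-level (edge and sub-graph) terms described in the abstract are omitted for brevity.

```python
# Minimal sketch, not the authors' released code: per-relation GCN encoders
# plus intra-relation (feature-level) and inter-relation contrastive terms,
# assuming PyTorch and PyTorch Geometric are available.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class MultiplexEncoder(torch.nn.Module):
    """One two-layer GCN per relation (each relation is a view of the graph)."""

    def __init__(self, in_dim, hid_dim, num_relations):
        super().__init__()
        self.convs1 = torch.nn.ModuleList(
            [GCNConv(in_dim, hid_dim) for _ in range(num_relations)])
        self.convs2 = torch.nn.ModuleList(
            [GCNConv(hid_dim, hid_dim) for _ in range(num_relations)])

    def forward(self, x, edge_indices):
        # Keep the outputs of both GCN layers so that feature-level
        # (multi-scale) signals can be contrasted against each other.
        shallow, deep = [], []
        for conv1, conv2, edge_index in zip(self.convs1, self.convs2, edge_indices):
            h1 = F.relu(conv1(x, edge_index))
            shallow.append(h1)
            deep.append(conv2(h1, edge_index))
        return shallow, deep


def nt_xent(z1, z2, tau=0.5):
    """Node-wise InfoNCE: the same node in the two views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def colm2s_style_loss(shallow, deep, tau=0.5):
    """Intra-relation term: contrast layer-1 vs. layer-2 embeddings within each
    relation (feature level). Inter-relation term: contrast the final embeddings
    of different relations so all views agree on a consensus node embedding."""
    intra = sum(nt_xent(h1, h2, tau) for h1, h2 in zip(shallow, deep))
    inter = sum(nt_xent(deep[i], deep[j], tau)
                for i in range(len(deep)) for j in range(i + 1, len(deep)))
    return intra + inter
```

A training loop would encode each relation's edge index with the shared node features, minimise `colm2s_style_loss`, and read a consensus embedding from the per-relation outputs (for example by averaging); how the paper actually fuses the per-relation embeddings and weights the structure-level signals is not specified in the abstract, so those steps are left open here.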

Keywords