Sensors (Nov 2024)
Explicit and Implicit Feature Contrastive Learning Model for Knowledge Graph Link Prediction
Abstract
Knowledge graph link prediction is crucial for constructing triples in knowledge graphs; it aims to infer whether a relation exists between two entities. Recently, graph neural networks and contrastive learning have demonstrated superior performance compared with traditional translation-based models, as they successfully extract common features through the explicit links between entities. However, the implicit associations between entities that lack a linking relationship are ignored, which prevents the model from capturing distant but semantically rich entities. In addition, directly applying contrastive learning based on random node dropout to link prediction tasks, or restricting it to the triplet level, limits model performance. To address these challenges, we design an implicit feature extraction module that exploits the clustering structure of the latent vector space to find entities with potential associations and enriches entity representations by mining similar semantic features at the conceptual level. Meanwhile, a subgraph mechanism is introduced to preserve the structural information of explicitly connected entities. The implicit semantic features and explicit structural features serve as complementary information and provide high-quality self-supervised signals. Experiments are conducted on three benchmark knowledge graph datasets, and the results show that our model outperforms state-of-the-art baselines on link prediction tasks.
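As a rough illustration of the idea described above (not the authors' implementation), implicit associations can be mined by clustering entity embeddings in the latent space and treating entities that share a cluster as conceptually related even when no explicit link exists; the names and hyperparameters below (entity_emb, num_clusters) are hypothetical.

```python
# Minimal sketch: implicit-neighbour discovery via clustering of entity
# embeddings. Entities in the same latent cluster are taken as candidate
# implicit associations, complementing the explicit graph links.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(1000, 64))  # toy entity embedding matrix (assumed)
num_clusters = 32                         # assumed hyperparameter

kmeans = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
cluster_id = kmeans.fit_predict(entity_emb)

def implicit_neighbours(entity: int) -> np.ndarray:
    """Return entities in the same latent cluster, excluding the entity itself."""
    mask = cluster_id == cluster_id[entity]
    mask[entity] = False
    return np.flatnonzero(mask)

print(implicit_neighbours(0)[:10])  # candidate implicitly associated entities
```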
Keywords