Explicit and Implicit Feature Contrastive Learning Model for Knowledge Graph Link Prediction

Authors: Yuan Xu, Wang Weihe, Gao Buyun, Zhao Liang, Ma Ruixin, Ding Feng
Knowledge graph link prediction is crucial for completing triples in knowledge graphs: it aims to infer whether a relation exists between two entities. Recently, graph neural networks and contrastive learning have demonstrated superior performance over traditional translation-based models by successfully extracting common features through explicit links between entities. However, implicit associations between entities that share no link are ignored, which prevents these models from capturing distant but semantically rich entities. In addition, directly applying contrastive learning based on random node dropout to link prediction, or restricting it to the triplet level, constrains model performance. To address these challenges, we design an implicit feature extraction module that exploits the clustering structure of the latent vector space to find entities with potential associations and enriches entity representations by mining similar semantic features at the conceptual level. Meanwhile, a subgraph mechanism is introduced to preserve the structural information of explicitly connected entities. Implicit semantic features and explicit structural features serve as complementary sources of high-quality self-supervised signals. Experiments on three benchmark knowledge graph datasets validate that our model outperforms state-of-the-art baselines on link prediction tasks.
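The core idea of the implicit feature extraction module, as the abstract describes it, is to cluster entity embeddings in latent space and treat entities that fall into the same cluster as implicitly associated, even when no explicit edge links them. The sketch below illustrates this with a toy 2-means clustering; it is not the authors' implementation, and the embedding dimension, cluster count, and deterministic initialization are all assumptions made for illustration.

```python
import numpy as np

def kmeans2(X, iters=20):
    """Toy 2-means over entity embeddings; deterministic init
    (first point, then the point farthest from it)."""
    centers = np.stack([X[0], X[np.linalg.norm(X - X[0], axis=1).argmax()]])
    for _ in range(iters):
        # assign every embedding to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def implicit_groups(labels):
    """Entities sharing a cluster are treated as implicitly associated,
    even when no explicit edge links them in the graph."""
    groups = {}
    for ent, c in enumerate(labels):
        groups.setdefault(int(c), []).append(ent)
    return groups

# toy embeddings: two well-separated semantic clusters of five entities each
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.1, (5, 8)), rng.normal(3.0, 0.1, (5, 8))])
labels = kmeans2(emb)
groups = implicit_groups(labels)
print(sorted(sorted(g) for g in groups.values()))
# entities 0-4 and 5-9 are each grouped as implicit neighbors
```

In the full model these cluster-mates would supply positive pairs for a contrastive objective, complementing the explicit structural view preserved by the subgraph mechanism.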
