Knowledge Graph Completion
206 papers with code • 7 benchmarks • 16 datasets
Knowledge graphs $G$ are represented as a collection of triples $\{(h, r, t)\}\subseteq E\times R\times E$, where $E$ and $R$ are the entity set and relation set, respectively. The task of Knowledge Graph Completion is either to predict an unseen relation $r$ between two existing entities, $(h, ?, t)$, or to predict the tail entity $t$ given the head entity and the query relation, $(h, r, ?)$.
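Tail prediction $(h, r, ?)$ is typically cast as ranking every candidate entity under a triple-scoring function. A minimal sketch of this with a TransE-style score (plausible triples have $h + r \approx t$ in embedding space) is below; the toy entities, relations, and randomly initialized embeddings are illustrative assumptions, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: entities and relations indexed by integers (illustrative).
entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["capital_of"]
triples = [(0, 0, 1), (2, 0, 3)]  # (Paris, capital_of, France), ...

dim = 16
E = rng.normal(size=(len(entities), dim))   # entity embeddings (untrained)
R = rng.normal(size=(len(relations), dim))  # relation embeddings (untrained)

def score(h, r, t):
    # TransE plausibility: higher (less negative) means more plausible.
    return -np.linalg.norm(E[h] + R[r] - E[t])

def predict_tail(h, r):
    # Answer the query (h, r, ?) by ranking every candidate entity.
    scores = [score(h, r, t) for t in range(len(entities))]
    return int(np.argmax(scores))
```

In practice the embeddings are trained with a margin or cross-entropy loss so that observed triples outscore corrupted ones; with the random embeddings above, `predict_tail` merely demonstrates the ranking interface.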
Latest papers
Contextualization Distillation from Large Language Model for Knowledge Graph Completion
While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synsets definitions often limits the potential of PLM-based KGC models.
Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion
This paper proposes a progressive distillation method based on masked generation features for KGC task, aiming to significantly reduce the complexity of pre-trained models.
Prompting Disentangled Embeddings for Knowledge Graph Completion with Pre-trained Language Model
Accordingly, we propose a new KGC method named PDKGC with two prompts -- a hard task prompt which is to adapt the KGC task to the PLM pre-training task of token prediction, and a disentangled structure prompt which learns disentangled graph representation so as to enable the PLM to combine more relevant structure knowledge with the text information.
Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs
Recent work in Natural Language Processing and Computer Vision has been using textual information -- e.g., entity names and descriptions -- available in knowledge graphs to ground neural models to high-quality structured data.
Better Together: Enhancing Generative Knowledge Graph Completion with Language Models and Neighborhood Information
In this study, we propose to include node neighborhoods as additional information to improve KGC methods based on language models.
Distance-Based Propagation for Efficient Knowledge Graph Reasoning
A new class of methods has been proposed to tackle this problem by aggregating path information.
Re-Temp: Relation-Aware Temporal Representation Learning for Temporal Knowledge Graph Completion
Temporal Knowledge Graph Completion (TKGC) under the extrapolation setting aims to predict the missing entity from a fact in the future, posing a challenge that aligns more closely with real-world prediction problems.
Negative Sampling with Adaptive Denoising Mixup for Knowledge Graph Embedding
Most existing negative sampling methods assume that non-existent triples with high scores are high-quality negative triples.
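The negative sampling that such methods build on is usually uniform corruption: replace the head or tail of an observed triple with a random entity, filtering out corruptions that are themselves true triples. A minimal sketch of that baseline follows (the adaptive denoising mixup proposed in the paper goes further); the toy `known` set is an assumption for illustration.

```python
import random

def corrupt(triple, num_entities, known_triples, rng=random):
    """Uniformly corrupt the head or tail of a true triple to get a negative."""
    h, r, t = triple
    while True:
        if rng.random() < 0.5:
            cand = (rng.randrange(num_entities), r, t)  # corrupt head
        else:
            cand = (h, r, rng.randrange(num_entities))  # corrupt tail
        # Filtered setting: discard corruptions that are actually true triples.
        if cand not in known_triples:
            return cand

# Illustrative known triples over 4 entities and 1 relation.
known = {(0, 0, 1), (2, 0, 3)}
neg = corrupt((0, 0, 1), num_entities=4, known_triples=known)
```

The paper's observation is that this baseline treats any high-scoring non-existent triple as a good negative, which admits false negatives and noisy corruptions.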
Can Text-based Knowledge Graph Completion Benefit From Zero-Shot Large Language Models?
We found that (1) without fine-tuning, LLMs can further improve the quality of entity text descriptions.
Making Large Language Models Perform Better in Knowledge Graph Completion
In this paper, we explore methods to incorporate structural information into the LLMs, with the overarching goal of facilitating structure-aware reasoning.