Knowledge Graph Completion
201 papers with code • 7 benchmarks • 16 datasets
A knowledge graph $G$ is represented as a collection of triples $\{(h, r, t)\}\subseteq E\times R\times E$, where $E$ and $R$ are the entity and relation sets. The task of Knowledge Graph Completion is to either predict an unseen relation $r$ between two existing entities, $(h, ?, t)$, or predict the tail entity $t$ given the head entity and the query relation, $(h, r, ?)$.
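The tail-prediction query $(h, r, ?)$ above can be sketched with a TransE-style embedding scorer, which models a triple as a translation $h + r \approx t$ in vector space. Everything here is illustrative: the toy entities, relations, and random embeddings are assumptions, not taken from any benchmark dataset or trained model.

```python
# Minimal sketch of tail prediction (h, r, ?) with a TransE-style scorer.
# Entity/relation names and embeddings are toy values for illustration only.
import numpy as np

rng = np.random.default_rng(0)
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]

dim = 8
E = {e: rng.normal(size=dim) for e in entities}  # entity embeddings
R = {r: rng.normal(size=dim) for r in relations}  # relation embeddings

def score(h, r, t):
    """TransE score: smaller distance ||h + r - t|| means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def predict_tail(h, r):
    """Rank all entities as candidate tails for the query (h, r, ?)."""
    return sorted(entities, key=lambda t: -score(h, r, t))

print(predict_tail("paris", "capital_of"))
```

In a real system the embeddings would be trained with a margin or cross-entropy loss over observed triples; the ranking step itself is the same.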
Latest papers with no code
Hyper-CL: Conditioning Sentence Representations with Hypernetworks
While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained semantics of sentences, particularly when conditioned on specific perspectives.
HDReason: Algorithm-Hardware Codesign for Hyperdimensional Knowledge Graph Reasoning
In cross-model and cross-platform comparisons, HDReason yields on average 4.2x higher performance and 3.4x better energy efficiency, with similar accuracy, versus the state-of-the-art FPGA-based GCN training platform.
Uncertainty-Aware Relational Graph Neural Network for Few-Shot Knowledge Graph Completion
An uncertainty representation is first designed to estimate the uncertainty scope of entity pairs after mapping their feature representations into a Gaussian distribution.
Temporal Knowledge Graph Completion with Time-sensitive Relations in Hypercomplex Space
Temporal knowledge graph completion (TKGC) aims to fill in missing facts within a given temporal knowledge graph at a specific time.
VN Network: Embedding Newly Emerging Entities with Virtual Neighbors
To address this issue, recent works apply graph neural networks to the existing neighbors of the unseen entities.
Knowledge Graph Assisted Automatic Sports News Writing
In this paper, we present a novel method for automatically generating sports news, which employs a unique algorithm that extracts pivotal moments from live text broadcasts and uses them to create an initial draft of the news.
EntailE: Introducing Textual Entailment in Commonsense Knowledge Graph Completion
In this paper, we propose to adopt textual entailment to find implicit entailment relations between CSKG nodes, to effectively densify the subgraph connecting nodes within the same conceptual class, which indicates a similar level of plausibility.
Rendering Graphs for Graph Reasoning in Multimodal Large Language Models
In this paper, we take the first step in incorporating visual information into graph reasoning tasks and propose a new benchmark GITQA, where each sample is a tuple (graph, image, textual description).
Contextualization Distillation from Large Language Model for Knowledge Graph Completion
While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synsets definitions often limits the potential of PLM-based KGC models.
Are We Wasting Time? A Fast, Accurate Performance Evaluation Framework for Knowledge Graph Link Predictors
First, we empirically find and theoretically motivate why sampling uniformly at random vastly overestimates the ranking performance of a method.
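The claim above, that ranking against a uniform sample of negatives overestimates performance, can be illustrated with synthetic scores. This is a hedged sketch under assumed values (10,000 candidate entities, 100 sampled negatives, random scores), not the paper's actual evaluation framework: because the sampled negatives are a subset of all negatives, the sampled rank can never exceed the full rank, and in expectation it shrinks by a factor of roughly $k/(n-1)$.

```python
# Sketch: full ranking vs. sampled-negative ranking for one test triple.
# All scores are synthetic; entity count and sample size are assumptions.
import random

random.seed(0)
n_entities = 10_000
true_score = 0.9                      # score of the correct tail entity
neg_scores = [random.random() for _ in range(n_entities - 1)]

# Full evaluation: rank the true tail against every candidate entity.
full_rank = 1 + sum(s > true_score for s in neg_scores)

# Sampled evaluation: rank against k uniformly sampled negatives only.
k = 100
sampled = random.sample(neg_scores, k)
sampled_rank = 1 + sum(s > true_score for s in sampled)

print(full_rank, sampled_rank)  # sampled rank is far smaller (looks better)
```

The sampled rank is optimistic by construction, which is one reason metrics like MRR computed over sampled negatives are not comparable to metrics computed over the full entity set.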