Knowledge Base Completion
64 papers with code • 0 benchmarks • 2 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
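As a minimal sketch of this setup, the snippet below represents a toy knowledge base as a set of triples and infers a missing fact with a single hand-written rule. The entities, relation names, and the rule itself are illustrative assumptions, not part of any real dataset or published method; actual KBC systems typically learn such inference patterns (e.g., via embeddings) rather than hard-coding them.

```python
# A toy knowledge base as a set of (subject, relation, object) triples.
# All entities and facts here are illustrative examples.
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def complete(subject, relation, kb):
    """Answer a (subject, relation, ?) query with the objects already in the KB."""
    return {o for s, r, o in kb if s == subject and r == relation}

def infer_located_in(kb):
    """A naive hand-written rule: a capital city is located in its
    country's region. Real KBC models learn such patterns instead."""
    inferred = set()
    for s, r, o in kb:
        if r == "capital_of":
            for region in complete(o, "located_in", kb):
                inferred.add((s, "located_in", region))
    return inferred

print(complete("Paris", "capital_of", kb))  # {'France'}
print(infer_located_in(kb))                 # {('Paris', 'located_in', 'Europe')}
```

Here ("Paris", "located_in", "Europe") is a missing fact recovered purely from facts already present, which is exactly what the task asks a model to do at scale.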
Benchmarks
These leaderboards are used to track progress in Knowledge Base Completion
Latest papers
Pre-training and Diagnosing Knowledge Base Completion Models
The method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases, i.e., knowledge bases where more than one copy of a real-world entity or relation may exist.
Knowledge Base Completion for Long-Tail Entities
To evaluate our method and various baselines, we introduce a novel dataset, called MALT, rooted in Wikidata.
Evaluating Language Models for Knowledge Base Completion
In a second step, we perform a human evaluation on predictions that are not yet in the KB, as only this provides real insights into the added value over existing KBs.
ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base Completion
However, there has been limited research on the zero-shot KBC settings, where we need to deal with unseen entities and relations that emerge in a constantly growing knowledge base.
Instance-based Learning for Knowledge Base Completion
In this paper, we propose a new method for knowledge base completion (KBC): instance-based learning (IBL).
mOKB6: A Multilingual Open Knowledge Base Completion Benchmark
Automated completion of open knowledge bases (Open KBs), which are constructed from triples of the form (subject phrase, relation phrase, object phrase) obtained via an open information extraction (Open IE) system, is useful for discovering novel facts that may not be directly present in the text.
Robust and Efficient Imbalanced Positive-Unlabeled Learning with Self-supervision
Learning from positive and unlabeled (PU) data is a setting where the learner only has access to positive and unlabeled samples while having no information on negative examples.
Effective Few-Shot Named Entity Linking by Meta-Learning
In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of in-domain labeled data and is more practical in real situations.
Capacity and Bias of Learned Geometric Embeddings for Directed Graphs
While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs.
Time in a Box: Advancing Knowledge Graph Completion with Temporal Scopes
Hence, knowledge base completion (KBC) on temporal knowledge bases (TKB), where each statement may be associated with a temporal scope, has attracted growing attention.