Knowledge Base Completion

64 papers with code • 0 benchmarks • 2 datasets

Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in a knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
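As a minimal illustration of the setting described above (the entities, relations, and the composition rule are hypothetical examples, not taken from any of the papers below), a knowledge base can be modeled as a set of triples, and completion as inferring triples that are not explicitly stored:

```python
# A toy knowledge base as a set of (subject, relation, object) triples.
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def complete(kb):
    """Infer missing facts with one hand-written composition rule:
    capital_of(x, y) and located_in(y, z) => located_in(x, z)."""
    inferred = set()
    for (s1, r1, o1) in kb:
        for (s2, r2, o2) in kb:
            if r1 == "capital_of" and r2 == "located_in" and o1 == s2:
                inferred.add((s1, "located_in", o2))
    return inferred - kb  # return only facts not already in the KB

print(complete(kb))  # {('Paris', 'located_in', 'Europe')}
```

Real KBC systems replace the hand-written rule with learned embeddings or learned rules, but the input/output contract is the same: triples in, inferred triples out.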

Pre-training and Diagnosing Knowledge Base Completion Models

vid-koci/kbctransferlearning 27 Jan 2024

The method works for both canonicalized knowledge bases and uncanonicalized (open) knowledge bases, i.e., knowledge bases where more than one copy of a real-world entity or relation may exist.


Knowledge Base Completion for Long-Tail Entities

tigerchen52/long_tail_kbc 30 Jun 2023

To evaluate our method and various baselines, we introduce a novel dataset, called MALT, rooted in Wikidata.


Evaluating Language Models for Knowledge Base Completion

bveseli/lmsforkbc 20 Mar 2023

In a second step, we perform a human evaluation on predictions that are not yet in the KB, as only this provides real insights into the added value over existing KBs.


ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base Completion

brickee/zerokbc 6 Dec 2022

However, there has been limited research on the zero-shot KBC settings, where we need to deal with unseen entities and relations that emerge in a constantly growing knowledge base.


Instance-based Learning for Knowledge Base Completion

chenxran/instancebasedlearning 13 Nov 2022

In this paper, we propose a new method for knowledge base completion (KBC): instance-based learning (IBL).


mOKB6: A Multilingual Open Knowledge Base Completion Benchmark

dair-iitd/mokb6 13 Nov 2022

Automated completion of open knowledge bases (Open KBs), which are constructed from (subject phrase, relation phrase, object phrase) triples obtained via an open information extraction (Open IE) system, is useful for discovering novel facts that may not be directly present in the text.
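Because Open KB triples are surface phrases rather than canonical entity IDs, the same real-world entity can appear under several phrases. A sketch of why this matters for completion (the phrases and the alias map are made-up examples, not from the mOKB6 paper):

```python
# Open KB triples use raw text phrases; no canonical entity IDs exist.
open_kb = [
    ("Barack Obama", "was born in", "Honolulu"),
    ("Obama", "served as", "US President"),  # same entity, different phrase
]

# Before the two facts can be combined, a completion model must recognize
# that "Obama" and "Barack Obama" refer to the same entity. A toy alias
# map stands in here for that learned resolution step.
aliases = {"Obama": "Barack Obama"}
canonical = [(aliases.get(s, s), r, aliases.get(o, o)) for (s, r, o) in open_kb]

print(canonical[1][0])  # Barack Obama
```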


Robust and Efficient Imbalanced Positive-Unlabeled Learning with Self-supervision

jschweisthal/impulses 6 Sep 2022

Learning from positive and unlabeled (PU) data is a setting where the learner only has access to positive and unlabeled samples while having no information on negative examples.


Effective Few-Shot Named Entity Linking by Meta-Learning

leezythu/MetaBLINK 12 Jul 2022

In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of in-domain labeled data and is more practical in real situations.


Capacity and Bias of Learned Geometric Embeddings for Directed Graphs

iesl/geometric_graph_embedding NeurIPS 2021

While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs.


Time in a Box: Advancing Knowledge Graph Completion with Temporal Scopes

ling-cai/time2box 12 Nov 2021

Hence, knowledge base completion (KBC) on temporal knowledge bases (TKB), where each statement may be associated with a temporal scope, has attracted growing attention.
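To make "temporal scope" concrete, here is a minimal sketch of a temporal KB where a statement optionally carries a validity interval (the facts and year ranges are illustrative assumptions, not drawn from the Time2Box paper):

```python
# Temporal KB statements extend triples with an optional (start, end) scope;
# None marks an atemporal fact that holds at all times.
tkb = [
    ("Barack Obama", "president_of", "USA", (2009, 2017)),
    ("Paris", "capital_of", "France", None),
]

def holds_at(fact, year):
    """Check whether a temporally scoped fact holds in a given year."""
    _, _, _, scope = fact
    if scope is None:
        return True
    start, end = scope
    return start <= year <= end

print(holds_at(tkb[0], 2012))  # True
print(holds_at(tkb[0], 2020))  # False
```

Temporal KBC then asks not only which triples are missing, but also during which interval an inferred statement is valid.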
