45 papers with code • 0 benchmarks • 0 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
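To make the triple representation concrete, here is a minimal sketch (with made-up entities and a hypothetical `candidate_completions` helper, not drawn from any of the papers below) of a KB stored as (subject, relation, object) triples and the candidate facts a completion model would score:

```python
# Illustrative sketch: a knowledge base as a set of
# (subject, relation, object) triples, plus a helper that enumerates
# the candidate completions a KBC model would assign scores to.

kb = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "located_in", "Europe"),
}

entities = {e for (s, _, o) in kb for e in (s, o)}

def candidate_completions(subject, relation):
    """Return (s, r, o) candidates not already in the KB.

    A completion model assigns each candidate a plausibility score;
    the highest-scoring candidates are predicted as missing facts.
    """
    return [
        (subject, relation, o)
        for o in entities
        if (subject, relation, o) not in kb
    ]

print(candidate_completions("Germany", "located_in"))
```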
Knowledge graph embedding has been an active research topic for knowledge graph completion (KGC), with progressive improvements from early models such as TransE, TransH, and RotatE to the current state-of-the-art QuatE.
Ranked #2 on Link Prediction on WN18
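As a rough illustration of the translational-embedding idea behind models like TransE, the sketch below scores a triple by how close e_s + e_r lies to e_o in the embedding space. The embeddings here are random stand-ins; real models learn them, typically with a margin-based ranking loss:

```python
# Minimal TransE-style scoring sketch: entities and relations share one
# embedding space, and a triple (s, r, o) is plausible when
# e_s + e_r is close to e_o.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entity_emb = {e: rng.normal(size=dim) for e in ["Paris", "France", "Berlin"]}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(s, r, o):
    # Higher (less negative) score = more plausible triple.
    return -np.linalg.norm(entity_emb[s] + relation_emb[r] - entity_emb[o])

# Rank candidate objects for the link-prediction query (Paris, capital_of, ?).
ranked = sorted(entity_emb, key=lambda o: -transe_score("Paris", "capital_of", o))
print(ranked)
```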
While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context.
(ii) Our system is designed to learn continuously during the KB completion task, and therefore significantly improves its performance over time on relations that are initially zero- or few-shot.
Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent.
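The rollback mechanism can be illustrated on a toy model. The sketch below is an illustration of the general idea, not the paper's implementation: during SGD it records each training example's cumulative parameter updates, then estimates that example's influence on a prediction by "rolling back" (subtracting) its updates from the final parameters:

```python
# Hedged gradient-rollback sketch on a toy linear regression.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)

w = np.zeros(3)
lr = 0.05
updates = np.zeros_like(X)          # per-example cumulative updates

for _ in range(30):                 # plain SGD, one example at a time
    for i in range(len(X)):
        grad = 2 * (X[i] @ w - y[i]) * X[i]
        updates[i] += -lr * grad
        w += -lr * grad

def influence(i, x_test):
    """Approximate example i's influence on the prediction at x_test
    by removing (rolling back) its recorded updates from w."""
    w_rolled_back = w - updates[i]
    return abs(x_test @ w - x_test @ w_rolled_back)

x_test = rng.normal(size=3)
scores = [influence(i, x_test) for i in range(len(X))]
print("most influential training example:", int(np.argmax(scores)))
```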
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Ranked #1 on Link Prediction on FB-AUTO
The computation graphs themselves then reflect the symmetries of the underlying data, similarly to lifted graphical models.
Temporal knowledge bases associate relational (s, r, o) triples with a set of times (or a single time instant) when the relation is valid.
Ranked #1 on Link Prediction on Wikidata12k
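As a concrete illustration of such temporal triples, the following sketch (hypothetical data and helper names) attaches an inclusive validity interval to each fact and filters facts by a query time:

```python
# Minimal temporal-KB sketch: each (s, r, o) triple carries the times
# at which the relation holds, here as inclusive (start, end) years.

temporal_kb = [
    ("Obama", "president_of", "USA", (2009, 2017)),
    ("Trump", "president_of", "USA", (2017, 2021)),
]

def valid_at(year):
    """Return the triples whose validity interval contains `year`."""
    return [
        (s, r, o)
        for (s, r, o, (start, end)) in temporal_kb
        if start <= year <= end
    ]

print(valid_at(2015))   # [('Obama', 'president_of', 'USA')]
```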