Almost all existing KGC research applies to only one KG at a time, and in one language only.
Temporal knowledge bases associate relational (s, r, o) triples with a set of times (or a single time instant) during which the relation holds.
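A minimal sketch of this representation (the class, field names, and example facts below are illustrative, not taken from any particular system): each fact pairs an (s, r, o) triple with a validity interval, where a single instant is the degenerate case start == end.

```python
from typing import NamedTuple

class TemporalFact(NamedTuple):
    s: str      # subject entity
    r: str      # relation
    o: str      # object entity
    start: int  # first year the fact is valid
    end: int    # last year the fact is valid (start == end for an instant)

def valid_at(fact: TemporalFact, year: int) -> bool:
    """Check whether the fact holds at the given time point."""
    return fact.start <= year <= fact.end

kb = [
    TemporalFact("Barack_Obama", "president_of", "USA", 2009, 2017),
    TemporalFact("Albert_Einstein", "born_in", "Ulm", 1879, 1879),
]

# Query: which subjects hold the relation at a given time?
hits = [f for f in kb if f.r == "president_of" and valid_at(f, 2012)]
```

Interval endpoints here are years for brevity; real temporal KBs such as Wikidata use finer-grained timestamps, but the validity check is the same.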
Most existing methods train with a small number of negative samples per positive instance in these datasets, to reduce computational cost.
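The standard way to obtain such negatives is corruption-based sampling: replace the subject or object of a positive triple with a random entity, discarding any corruption that happens to be a known positive. A small sketch under these assumptions (the toy entities and facts are made up for illustration):

```python
import random

entities = ["e1", "e2", "e3", "e4"]
positives = {("e1", "likes", "e2"), ("e3", "likes", "e4")}

def sample_negatives(s, r, o, k, rng):
    """Draw k corrupted triples for the positive (s, r, o),
    corrupting either side and skipping accidental positives."""
    negs = []
    while len(negs) < k:
        if rng.random() < 0.5:
            cand = (s, r, rng.choice(entities))   # corrupt the object
        else:
            cand = (rng.choice(entities), r, o)   # corrupt the subject
        if cand not in positives and cand != (s, r, o):
            negs.append(cand)
    return negs
```

With a small k the per-positive training cost stays low, which is exactly the trade-off the sentence above describes; scoring against all entities instead recovers the full-softmax setting.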
State-of-the-art knowledge base completion (KBC) models score every fact, known or unknown, via a latent factorization over entity and relation embeddings.
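One common instance of such a latent factorization is a bilinear (DistMult-style) model, sketched below with random, untrained embeddings purely to show the scoring and ranking mechanics; the dimensions and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 8, 4, 2

E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def score(s: int, r: int, o: int) -> float:
    """Score triple (s, r, o) as the trilinear product
    <e_s, w_r, e_o> = sum_d e_s[d] * w_r[d] * e_o[d]."""
    return float(np.sum(E[s] * R[r] * E[o]))

def rank_objects(s: int, r: int) -> np.ndarray:
    """Answer the query (s, r, ?) by scoring every entity as the
    object and sorting in descending order of plausibility."""
    scores = E @ (E[s] * R[r])           # vectorized over all entities
    return np.argsort(-scores)
```

In a trained model the embeddings are fit so that observed facts score higher than corruptions; link prediction then reports the rank of the true object in this ordering.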
If not, what characteristics of a dataset determine the relative performance of matrix factorization (MF) and tensor factorization (TF) models?