449 papers with code • 1 benchmarks • 10 datasets
Graph embeddings learn a mapping from a network to a vector space, while preserving relevant network properties.
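One common way to realize such a mapping is matrix factorization of the adjacency matrix. The sketch below is illustrative only (the toy edge list and embedding dimension `d` are assumptions, not from any specific paper): a truncated SVD assigns each node a low-dimensional vector that preserves connectivity structure.

```python
import numpy as np

# Toy undirected graph (assumed for illustration): a 4-node cycle
# with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, d = 4, 2  # number of nodes, embedding dimension (assumptions)

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0  # symmetric adjacency matrix

# Truncated SVD: keep the top-d singular directions as embeddings.
U, S, _ = np.linalg.svd(A)
Z = U[:, :d] * np.sqrt(S[:d])  # one d-dimensional vector per node

print(Z.shape)  # (4, 2)
```

Nodes with similar neighborhoods end up close in the embedded space, which is the property downstream tasks such as link prediction rely on.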
(Image credit: GAT)
We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links.
HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect hierarchy among entities.
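The polar-coordinate idea can be sketched as a scoring function with two parts per entity: a modulus (radius, encoding depth in the hierarchy) and a phase (angle, separating entities at the same depth). The dimension `d` and mixing weight `lam` below are assumptions for illustration, not HAKE's published hyperparameters.

```python
import numpy as np

d, lam = 8, 0.5  # embedding dimension and phase weight (assumptions)
rng = np.random.default_rng(0)

# Random toy embeddings: modulus parts (radii) and phase parts (angles).
h_m, r_m, t_m = rng.uniform(0.5, 1.5, (3, d))
h_p, r_p, t_p = rng.uniform(0.0, 2 * np.pi, (3, d))

def polar_score(h_m, h_p, r_m, r_p, t_m, t_p, lam):
    # Modulus distance: radial mismatch after scaling by the relation,
    # capturing whether head and tail sit at compatible hierarchy levels.
    d_mod = np.linalg.norm(h_m * r_m - t_m)
    # Phase distance: angular mismatch on the circle, separating
    # entities at the same level.
    d_phase = np.abs(np.sin((h_p + r_p - t_p) / 2)).sum()
    return -(d_mod + lam * d_phase)  # higher score = more plausible

score = polar_score(h_m, h_p, r_m, r_p, t_m, t_p, lam)
print(score)
```

A true triplet should place the relation-scaled head at the tail's radius and angle, driving both distance terms toward zero.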
The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error.
The dominant paradigm for relation prediction in knowledge graphs involves learning and operating on latent representations (i.e., embeddings) of entities and relations.
Recent works on representation learning for graph structured data predominantly focus on learning distributed representations of graph substructures such as nodes and subgraphs.
Negative sampling, which draws negative triplets from those not observed in the training data, is an important step in KG embedding.
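The simplest form of this step can be sketched as uniform corruption: replace the head or tail of an observed triplet with a random entity, rejecting any candidate that is itself observed. The toy triplets and entity set below are assumptions for illustration.

```python
import random

# Toy observed knowledge graph (assumed for illustration).
triples = {("a", "likes", "b"), ("b", "likes", "c"), ("a", "knows", "c")}
entities = ["a", "b", "c", "d"]

def sample_negative(triple, triples, entities, rng=random):
    """Uniformly corrupt the head or tail until the result is unobserved."""
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        # Corrupt head or tail with equal probability.
        cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if cand not in triples:  # keep only non-observed triplets
            return cand

neg = sample_negative(("a", "likes", "b"), triples, entities)
print(neg)
```

More sophisticated schemes bias this sampler toward hard negatives (corruptions the current model scores highly), which is where much of the recent work focuses.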