Graph embeddings learn a mapping from a network to a vector space, while preserving relevant network properties.
Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs.
This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction.
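The idea of embedding a network into a low-dimensional vector space can be illustrated with a minimal sketch. This is not the method of any particular paper above; it simply factorizes a toy adjacency matrix with a truncated SVD so that each node gets a short vector reflecting its connectivity.

```python
import numpy as np

# Toy undirected graph on 4 nodes, given as an adjacency matrix.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Truncated SVD of the adjacency matrix yields one low-dimensional
# vector per node that preserves neighborhood structure.
dim = 2
U, S, _ = np.linalg.svd(A)
embeddings = U[:, :dim] * S[:dim]  # shape (4, 2): one 2-d vector per node

print(embeddings.shape)
```

Real systems replace the SVD with scalable, sampled objectives, but the input/output contract is the same: nodes in, vectors out.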
Graphs, such as social networks, word co-occurrence networks, and communication networks, occur naturally in various real-world applications.
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
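The masked self-attention in a GAT layer can be sketched as follows. This is a simplified single-head NumPy illustration (random toy weights, no training, LeakyReLU slope 0.2 as an assumption), not the reference implementation: each node attends only to its graph neighbors, and the attention weights come from a shared linear transform plus a learned attention vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: 4 nodes with 3 features each, plus an adjacency mask
# (1 = edge; self-loops included so each node attends to itself).
H = rng.normal(size=(4, 3))
A = np.array([[1, 1, 1, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 1]])

W = rng.normal(size=(3, 2))   # shared linear transform
a = rng.normal(size=(4,))     # attention vector over [W h_i || W h_j]

Z = H @ W                     # transformed features, shape (4, 2)

# Unnormalized attention: e[i, j] = LeakyReLU(a^T [Z_i || Z_j]).
e = np.empty((4, 4))
for i in range(4):
    for j in range(4):
        x = a @ np.concatenate([Z[i], Z[j]])
        e[i, j] = np.maximum(0.2 * x, x)  # LeakyReLU

# Masked softmax: attention is distributed only over actual neighbors.
e = np.where(A == 1, e, -np.inf)
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

H_out = alpha @ Z             # attention-weighted neighborhood aggregation
print(H_out.shape)
```

The mask is the key difference from ordinary self-attention: non-edges receive exactly zero weight, so the layer respects the graph structure.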
This adversarial learning framework is independent of the concrete forms of the generator and discriminator, and can therefore use a wide variety of knowledge graph embedding models as its building blocks.
The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error.
Deep learning on graphs has become a popular research topic with many applications.
We examine two fundamental tasks associated with graph representation learning: link prediction and node classification.
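Given learned node embeddings, both tasks reduce to simple geometric operations. The sketch below assumes hypothetical pre-trained embeddings (random here for illustration) and shows one common recipe for each: scoring a candidate link by an inner product, and classifying a node by its nearest labeled class centroid. The labels and node indices are made up.

```python
import numpy as np

# Hypothetical pre-trained node embeddings: 8 nodes, 4 dimensions.
rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 4))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Link prediction: score a candidate edge (i, j) by the inner
# product of its endpoint embeddings, squashed to a probability.
def link_score(i, j):
    return sigmoid(emb[i] @ emb[j])

# Node classification: nearest-centroid over a few labeled nodes
# per class (these labels are illustrative, not from real data).
labels = {0: "A", 1: "A", 2: "B", 3: "B"}
centroids = {c: emb[[n for n, l in labels.items() if l == c]].mean(axis=0)
             for c in ("A", "B")}

def classify(i):
    return min(centroids, key=lambda c: np.linalg.norm(emb[i] - centroids[c]))

print(link_score(4, 5), classify(6))
```

In practice the classifier is usually logistic regression or a small MLP trained on the embeddings, but the division of labor is the same: the embedding captures structure, and a lightweight model on top performs the task.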