Graph embeddings learn a mapping from a network to a vector space, while preserving relevant network properties.
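As a minimal illustration of this idea (a generic sketch, not any specific published method), one can factorize a graph's adjacency matrix and use the leading singular vectors as node coordinates, so that nodes with similar neighborhoods land near each other in the vector space:

```python
import numpy as np

# Toy undirected graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Rank-d truncated SVD of A gives d-dimensional node embeddings.
d = 2
U, S, _ = np.linalg.svd(A)
emb = U[:, :d] * np.sqrt(S[:d])  # scale columns by sqrt of singular values

def dist(i, j):
    """Euclidean distance between two node embeddings."""
    return np.linalg.norm(emb[i] - emb[j])
```

With this factorization, nodes inside the same triangle (e.g. 0 and 1) end up closer to each other than to nodes in the opposite triangle (e.g. 0 and 5), which is the "preserving relevant network properties" part in miniature.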
(Image credit: GAT)
Recent interest in graph embedding methods has focused on learning a single representation for each node in the graph.
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Ranked #2 on Node Classification on Wiki-Vote
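The core operation the abstract describes — masked self-attention over a node's neighborhood — can be sketched in a few lines of numpy. This is a simplified single-head forward pass in the spirit of GAT (no training, no multi-head concatenation; all weights here are illustrative random inputs, not the paper's implementation):

```python
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """One simplified single-head graph attention layer (GAT-style sketch).

    H: (n, f_in) node features; A: (n, n) adjacency matrix (1 = edge);
    W: (f_in, f_out) shared weight matrix; a: (2*f_out,) attention vector.
    """
    n = A.shape[0]
    f_out = W.shape[1]
    Wh = H @ W                                # shared linear transform
    # Attention logits e_ij = LeakyReLU(a^T [Wh_i || Wh_j]).
    e = (Wh @ a[:f_out])[:, None] + (Wh @ a[f_out:])[None, :]
    e = np.where(e > 0, e, alpha * e)         # LeakyReLU
    # Masked softmax: each node attends only to its neighbours and itself.
    mask = A + np.eye(n)
    e = np.where(mask > 0, e, -1e9)
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)
    return att @ Wh                           # attention-weighted aggregation

# Toy usage: a 4-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
out = gat_layer(rng.normal(size=(4, 3)), A,
                rng.normal(size=(3, 2)), rng.normal(size=(4,)))
```

The mask is what makes the attention "graph-structured": logits for non-neighbors are driven to effectively zero weight, so each output row is a learned convex combination of the node's own and its neighbors' transformed features.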
Graph embedding methods produce unsupervised node features from graphs that can then be used for a variety of machine learning tasks.
Ranked #1 on Link Prediction on YouTube (Macro F1 metric)
Implementations and experiments of graph embedding algorithms: DeepWalk, LINE (Large-scale Information Network Embedding), node2vec, SDNE (Structural Deep Network Embedding), and struc2vec.
Ranked #2 on Node Classification on Wikipedia
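Several of the algorithms listed above (DeepWalk, node2vec) share the same first stage: sample truncated random walks and treat them as sentences over node "words". A minimal sketch of that stage (function name and defaults are illustrative, not from any of the repos):

```python
import random
from collections import defaultdict

def random_walks(edges, num_walks=10, walk_len=5, seed=42):
    """Generate truncated random walks over an undirected graph
    (DeepWalk-style sketch)."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    walks = []
    for _ in range(num_walks):          # several passes over all nodes
        for start in sorted(adj):       # start one walk from every node
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks
```

DeepWalk then trains a skip-gram model (e.g. gensim's Word2Vec) on these walks to produce node embeddings; node2vec differs mainly in biasing the transition probabilities instead of sampling neighbors uniformly.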
This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction.
Ranked #5 on Node Classification on Wikipedia
In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which scales to preserving high-order proximities of large-scale graphs and is capable of capturing asymmetric transitivity.
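The idea behind HOPE can be sketched with one of the proximity measures it supports, Katz proximity: factorize the (asymmetric) proximity matrix into separate source and target embeddings. This toy version uses a dense SVD for clarity; the paper itself uses a generalized SVD to achieve scalability:

```python
import numpy as np

# Directed toy graph with a transitive chain: 0 -> 1 -> 2.
A = np.zeros((3, 3))
A[0, 1] = A[1, 2] = 1.0

beta = 0.1  # decay factor; must stay below 1 / spectral_radius(A)
# Katz proximity S = (I - beta*A)^{-1} (beta*A); asymmetric in general.
S = np.linalg.inv(np.eye(3) - beta * A) @ (beta * A)

# Factorizing S yields separate source and target embeddings, so the
# asymmetric fact "0 reaches 2, but 2 does not reach 0" is preserved.
d = 2
U, sig, Vt = np.linalg.svd(S)
src = U[:, :d] * np.sqrt(sig[:d])       # a node in its role as source
tgt = Vt[:d, :].T * np.sqrt(sig[:d])    # a node in its role as target

approx = src @ tgt.T                    # inner products approximate S
```

Because each node gets two vectors, the inner product `src[0] . tgt[2]` can be large while `src[2] . tgt[0]` stays near zero — exactly the asymmetric transitivity a single symmetric embedding cannot represent.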
We present Karate Club, a Python framework combining more than 30 state-of-the-art graph mining algorithms that can solve unsupervised machine learning tasks.
Specifically, it uses embeddings of line graphs to complement either the edge-label information or the structural information that Graph2vec misses.
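For context, the line graph L(G) has one node per edge of G, and two of its nodes are adjacent exactly when the corresponding edges of G share an endpoint — so its structure encodes edge-level information directly. A minimal construction (a hypothetical helper, not the paper's code):

```python
from itertools import combinations

def line_graph_edges(edges):
    """Edges of the line graph: one node per original (undirected) edge,
    two such nodes connected when the original edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    return [(e1, e2) for e1, e2 in combinations(edges, 2)
            if set(e1) & set(e2)]  # edges sharing a vertex become adjacent
```

For example, the line graph of a triangle is again a triangle (its three edges pairwise share a vertex), while two disjoint edges produce a line graph with no edges at all.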