Graph Representation Learning
375 papers with code • 1 benchmark • 6 datasets
The goal of Graph Representation Learning is to construct a set of features (‘embeddings’) representing the structure of the graph and the data attached to it. We can distinguish among node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge of the graph; and graph-wise embeddings, which represent the graph as a whole.
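The three granularities above can be sketched in a few lines. This is a minimal, library-free illustration (not any specific paper's method): node embeddings from one round of neighbor mean-aggregation, edge embeddings by concatenating endpoint embeddings, and a graph embedding by mean pooling. All weights and shapes are illustrative stand-ins.

```python
import numpy as np

# Toy 4-node graph.
A = np.array([[0, 1, 1, 0],          # adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))   # initial node features

# Node-wise embeddings: mean-aggregate neighbor features, then apply a
# linear map and nonlinearity (random weights stand in for trained ones).
deg = A.sum(axis=1, keepdims=True)
W = np.random.default_rng(1).normal(size=(8, 16))
H = np.tanh((A @ X) / deg @ W)                     # shape (4, 16)

# Edge-wise embeddings: combine the two endpoint embeddings of each edge.
edges = np.argwhere(np.triu(A) > 0)                # one row per undirected edge
E = np.concatenate([H[edges[:, 0]], H[edges[:, 1]]], axis=1)   # (4, 32)

# Graph-wise embedding: pool all node embeddings into a single vector.
g = H.mean(axis=0)                                 # shape (16,)

print(H.shape, E.shape, g.shape)
```

Real models stack several such aggregation layers and learn the weights end-to-end; the pooling step for graph-wise embeddings is likewise often learned rather than a plain mean.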
Libraries
Use these libraries to find Graph Representation Learning models and implementations

Most implemented papers
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
Do Transformers Really Perform Bad for Graph Representation?
Our key insight is that, to apply the Transformer to graphs, the structural information of the graph must be effectively encoded into the model.
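One way to encode graph structure into attention, loosely in the spirit of this paper's spatial encoding, is to add a learnable scalar bias to each attention logit indexed by the shortest-path distance between the two nodes. The sketch below is a hedged illustration with random stand-in weights, not the paper's full model.

```python
import numpy as np
from itertools import product

# Toy path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n = A.shape[0]

# All-pairs shortest-path distances via Floyd-Warshall (k is the outer loop).
D = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(D, 0.0)
for k, i, j in product(range(n), repeat=3):
    D[i, j] = min(D[i, j], D[i, k] + D[k, j])

# One learnable bias per distance value (random stand-ins for trained weights).
b = np.random.default_rng(0).normal(size=int(D.max()) + 1)
bias = b[D.astype(int)]                 # (n, n) additive attention bias

# Plain scaled dot-product attention plus the structural bias.
Q = K = V = np.random.default_rng(1).normal(size=(n, 8))
logits = Q @ K.T / np.sqrt(8) + bias
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
out = attn @ V
print(out.shape)
```

Because the bias depends only on graph distance, two nodes attend to each other differently depending on how far apart they are in the graph, even though plain attention is otherwise permutation-blind to structure.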
Effect of Choosing Loss Function when Using T-batching for Representation Learning on Dynamic Networks
These findings underscore the efficacy of the proposed loss functions in dynamic network modeling.
Hyperbolic Neural Networks
However, the representational power of hyperbolic geometry is not yet on par with Euclidean geometry, mostly because of the absence of corresponding hyperbolic neural network layers.
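The basic operations such hyperbolic layers are built from can be written down directly in the Poincaré-ball model: Möbius addition, the exponential map at the origin (which maps tangent vectors onto the ball), and the induced distance. The formulas below follow the standard Poincaré-ball model with curvature parameter c > 0; variable names are illustrative.

```python
import numpy as np

c = 1.0  # ball curvature parameter

def mobius_add(x, y):
    """Mobius addition on the Poincare ball."""
    xy = np.dot(x, y); x2 = np.dot(x, x); y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def expmap0(v):
    """Map a nonzero tangent vector at the origin onto the ball."""
    n = np.linalg.norm(v)
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def dist(x, y):
    """Geodesic distance between two points on the ball."""
    return 2 / np.sqrt(c) * np.arctanh(
        np.sqrt(c) * np.linalg.norm(mobius_add(-x, y)))

u = expmap0(np.array([0.3, 0.0]))
w = expmap0(np.array([-0.3, 0.0]))
print(dist(u, w))
```

With these primitives, a "hyperbolic linear layer" can be defined by mapping to the tangent space, applying an ordinary linear map, and mapping back with the exponential map, which is the general strategy such papers pursue.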
Deep Graph Contrastive Representation Learning
Moreover, our unsupervised method even surpasses its supervised counterparts on transductive tasks, demonstrating its great potential in real-world applications.
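The contrastive objective used by methods in this family can be sketched as an InfoNCE-style loss: embeddings of the same node under two stochastic views of the graph are pulled together, while all other pairs are pushed apart. The encoder and augmentations below are random stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 6, 4, 0.5    # nodes, embedding dim, temperature

def normalize(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

# Stand-ins for encoder outputs on two augmented views of the same graph
# (real methods generate views via e.g. edge dropping and feature masking).
Z1 = normalize(rng.normal(size=(n, d)))
Z2 = normalize(Z1 + 0.1 * rng.normal(size=(n, d)))

def info_nce(Za, Zb):
    sim = Za @ Zb.T / tau    # pairwise cosine similarities, temperature-scaled
    # For node i, the positive pair is (i, i); other columns are negatives.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

# Symmetrize over the two view orderings.
loss = (info_nce(Z1, Z2) + info_nce(Z2, Z1)) / 2
print(loss)
```

Minimizing this loss requires no labels, which is why such unsupervised objectives can compete with supervised training on transductive benchmarks.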
Towards Deeper Graph Neural Networks
Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning
Instead of learning on the complete input graph, Subg-Con learns node representations through a contrastive loss defined on subgraphs sampled from the original graph via a novel data augmentation strategy.
Large-Scale Representation Learning on Graphs via Bootstrapping
To address these challenges, we introduce Bootstrapped Graph Latents (BGRL) - a graph representation learning method that learns by predicting alternative augmentations of the input.
Towards a Unified Framework for Fair and Stable Graph Representation Learning
In this work, we establish a key connection between counterfactual fairness and stability and leverage it to propose a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations.
Structure-Aware Transformer for Graph Representation Learning
Here, we show that the node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them.