123 papers with code • 1 benchmark • 4 datasets

**Graph Representation Learning** is the task of constructing a set of features ('embeddings') that represent the structure of a graph and the data attached to it. We can distinguish among node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge; and graph-wise embeddings, which represent the graph as a whole.
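As a toy illustration of these three granularities, here is a minimal numpy sketch. The graph, one-hot features, and random projection are made up for illustration; node embeddings come from a single untrained GCN-style propagation step, an edge embedding from the Hadamard product of its endpoints, and a graph embedding from mean pooling.

```python
import numpy as np

# Toy 4-node path graph (undirected): 0-1, 1-2, 2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features

# Node-wise embeddings: one round of mean aggregation over neighbours
A_hat = A + np.eye(4)                              # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))           # degree normalisation
W = np.random.default_rng(0).normal(size=(4, 2))   # random projection to 2-d
node_emb = D_inv @ A_hat @ X @ W                   # shape (4, 2)

# Edge-wise embedding: Hadamard product of the endpoint embeddings
edge_emb = node_emb[1] * node_emb[2]               # embedding for edge (1, 2)

# Graph-wise embedding: mean-pool the node embeddings
graph_emb = node_emb.mean(axis=0)                  # shape (2,)
```

In practice the projection `W` would be trained end-to-end, and other edge operators (concatenation, absolute difference) and pooling functions (sum, max, attention) are common.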

The proposed framework is evaluated with real-world incident data collected from a large-scale online service system of Huawei Cloud.

Recently, a number of self-supervised learning (SSL) methods for graph representation learning have achieved performance comparable to state-of-the-art semi-supervised GNNs.

The self-supervised loss is designed to maximize the agreement between the embeddings of the same node in the topology graph and in the feature graph.
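Such an agreement objective is commonly implemented as an InfoNCE-style contrastive loss: for each node, its embedding in the other view is the positive and all other nodes act as negatives. The sketch below is a generic illustration with synthetic embeddings, not the specific loss of the paper summarized above; the temperature `tau` and the array shapes are arbitrary choices.

```python
import numpy as np

def agreement_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss: for each node, its embedding in the other
    view is the positive; all other nodes are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) cross-view similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives lie on the diagonal

rng = np.random.default_rng(0)
z_topo = rng.normal(size=(8, 4))                  # topology-graph embeddings
z_feat = z_topo + 0.01 * rng.normal(size=(8, 4))  # near-identical second view
loss_aligned = agreement_loss(z_topo, z_feat)
loss_random = agreement_loss(z_topo, rng.normal(size=(8, 4)))
```

When the two views agree node-by-node, the diagonal similarities dominate and the loss is small; with unrelated embeddings it approaches the uniform-softmax value of log n.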

In practice, graph embedding (graph representation learning) attempts to learn a lower-dimensional representation vector for each node, or for the whole graph, while preserving the essential structural information of the graph.

Second, it learns a generic model for graph cascade tasks via self-supervised contrastive pre-training using both unlabeled and labeled data.

Our local2global approach proceeds by first dividing the input graph into overlapping subgraphs (or "patches") and training local representations for each patch independently.
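Because each patch is embedded independently, the local coordinate systems are arbitrary, and stitching them into a global embedding requires aligning patches on their overlap nodes. A standard tool for this is orthogonal Procrustes; the sketch below illustrates the idea with two synthetic patches and a made-up rotation, not the actual method's training output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two overlapping "patches": nodes 0-5 and nodes 4-9 (overlap: nodes 4, 5)
patch1_nodes = list(range(0, 6))
patch2_nodes = list(range(4, 10))

# Stand-ins for independently trained 2-d local embeddings
emb1 = rng.normal(size=(6, 2))
# Patch 2 agrees with patch 1 on the overlap, but only up to a rotation:
# the two local coordinate systems are arbitrary.
theta = 0.8
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
emb2 = rng.normal(size=(6, 2))
emb2[0] = emb1[4] @ R   # node 4 is row 0 of patch 2
emb2[1] = emb1[5] @ R   # node 5 is row 1 of patch 2

# Align patch 2 to patch 1 via orthogonal Procrustes on the overlap rows
M = emb2[:2].T @ emb1[4:6]
U, _, Vt = np.linalg.svd(M)
Q = U @ Vt                      # best rotation mapping patch 2 -> patch 1
emb2_aligned = emb2 @ Q         # overlap rows now coincide with patch 1
```

With many patches, these pairwise alignments are combined into a single consistent set of transformations before the local embeddings are merged.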

We present a new dataset of Wikipedia articles, each paired with a knowledge graph, to facilitate research in conditional text generation, graph generation, and graph representation learning.

Ranked #1 on KG-to-Text Generation on WikiGraphs

In doing so, we demonstrate evidence of scalable self-supervised graph representation learning and of the utility of very deep GNNs -- both important open issues.

We present a novel learning-based approach to graph representations of road networks employing state-of-the-art graph convolutional neural networks.

Our proposed model consists of a graph convolutional network (GCN) encoder and a stochastic decoder, which are layer-wise connected by a hierarchical variational auto-encoder architecture.
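In the spirit of such a model -- a GCN encoder producing a latent distribution per node, followed by a stochastic decoder -- a minimal numpy forward pass might look as follows. This is a generic variational-graph-auto-encoder sketch with an inner-product decoder, not the hierarchical architecture described above; all weights and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy graph: symmetrically normalised adjacency, one-hot features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
X = np.eye(4)

# GCN encoder: one shared layer, then separate heads for mu and log-sigma
W0 = rng.normal(scale=0.5, size=(4, 8))
W_mu = rng.normal(scale=0.5, size=(8, 2))
W_logsig = rng.normal(scale=0.5, size=(8, 2))
H = np.maximum(A_norm @ X @ W0, 0)           # ReLU(GCN layer)
mu = A_norm @ H @ W_mu                       # per-node latent mean
log_sig = A_norm @ H @ W_logsig              # per-node latent log-std

# Reparameterisation trick: sample latent node embeddings
Z = mu + np.exp(log_sig) * rng.normal(size=mu.shape)

# Stochastic inner-product decoder: edge probabilities
A_rec = sigmoid(Z @ Z.T)                     # (4, 4) reconstructed adjacency
```

Training would maximise the evidence lower bound: a reconstruction term comparing `A_rec` to `A` plus a KL penalty keeping each node's latent distribution close to a standard normal.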