Graph Autoencoder for Graph Compression and Representation Learning

We consider the problem of graph data compression and representation. Recent developments in graph neural networks (GNNs) focus on generalizing convolutional neural networks (CNNs) to graph data, including redesigning convolution and pooling operations for graphs. However, few methods address effective graph compression: producing a smaller graph that can reconstruct the original full graph with less storage and that provides useful latent representations for downstream tasks. To fill this gap, we propose the Multi-kernel Inductive Attention Graph Autoencoder (MIAGAE), which, instead of compressing nodes and edges separately, uses node similarity and graph structure to compress all nodes and edges as a whole. Its similarity-attention graph pooling selects the most representative and informative nodes based on the similarity and topology among nodes, and its multi-kernel Inductive-Convolution layer attends to different aspects of the data to learn more general node representations in evolving graphs. We demonstrate that MIAGAE outperforms state-of-the-art methods on graph compression and few-shot graph classification, with superior graph representation learning.
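The abstract does not spell out the pooling mechanism, but its core idea can be sketched: score each node by how similar it is to its neighborhood, then keep only the top-scoring nodes and the subgraph they induce. The NumPy sketch below is illustrative only; the function name `similarity_attention_pool`, the cosine-similarity scoring, and the fixed keep ratio are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def similarity_attention_pool(X, A, ratio=0.5):
    """Hypothetical sketch of similarity-based top-k graph pooling.

    X     : (n, d) node feature matrix
    A     : (n, n) binary adjacency matrix
    ratio : fraction of nodes to keep

    Scores each node by its average cosine similarity to its
    neighbors (assumption: nodes similar to their neighborhood are
    representative), keeps the top-scoring nodes, and returns the
    induced subgraph.
    """
    # Pairwise cosine similarity between node feature vectors.
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-8
    Xn = X / norms
    S = Xn @ Xn.T

    # Average similarity to neighbors; isolated nodes score 0.
    deg = A.sum(axis=1)
    scores = (A * S).sum(axis=1) / np.maximum(deg, 1)

    # Keep the top ceil(ratio * n) nodes.
    k = max(1, int(np.ceil(ratio * X.shape[0])))
    keep = np.argsort(-scores)[:k]

    # Induced subgraph: gate kept features by their scores (in a
    # learned setting this keeps the selection differentiable) and
    # slice the adjacency matrix.
    X_pool = X[keep] * scores[keep, None]
    A_pool = A[np.ix_(keep, keep)]
    return X_pool, A_pool, keep

# Toy usage: a 6-node path graph with random features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1
X_pool, A_pool, keep = similarity_attention_pool(X, A, ratio=0.5)
print(keep, X_pool.shape, A_pool.shape)  # 3 kept nodes
```

Gating the kept features by their scores, as in TopKPool-style methods, is what lets a score function of this kind be trained end to end; whether MIAGAE uses this exact gating is an assumption here.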
