Representation Learning on Graphs with Jumping Knowledge Networks

Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models and propose a strategy to overcome their limitations. In particular, the range of "neighboring" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.
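The core idea described above -- letting each node draw on a different neighborhood range -- amounts to keeping the representations a node receives at every GNN layer and combining them at the end. The paper proposes concatenation, element-wise max-pooling, and an LSTM-attention variant as aggregators. The sketch below illustrates the first two with plain numpy; `jk_aggregate` is an illustrative name, not an API from the paper's code, and the inputs stand in for per-layer GNN outputs:

```python
import numpy as np

def jk_aggregate(layer_reps, mode="max"):
    """Jumping Knowledge aggregation (sketch): combine each node's
    representations from all GNN layers into a final representation.

    layer_reps: list of (num_nodes, dim) arrays, one per layer
                (layer k reflects the k-hop neighborhood).
    mode: "concat" joins the layers feature-wise (fixed-size mix of
          ranges); "max" takes an element-wise maximum across layers,
          selecting the most informative range per feature per node.
    """
    if mode == "concat":
        # (num_nodes, num_layers * dim)
        return np.concatenate(layer_reps, axis=1)
    if mode == "max":
        # (num_layers, num_nodes, dim) -> max over the layer axis
        return np.stack(layer_reps, axis=0).max(axis=0)
    raise ValueError(f"unknown mode: {mode}")

# Toy example: 3 layers of representations for 2 nodes, dim 2.
reps = [np.array([[1., 0.], [0., 1.]]),
        np.array([[0., 2.], [3., 0.]]),
        np.array([[1., 1.], [1., 1.]])]
print(jk_aggregate(reps, "max"))           # per-feature max over layers
print(jk_aggregate(reps, "concat").shape)  # (2, 6)
```

The max-pool aggregator is what makes the range node-adaptive: two nodes in the same graph can end up dominated by different layers, e.g. a hub node by its 1-hop layer and a peripheral node by a deeper layer.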

Published at ICML 2018.

Results from the Paper


| Task                     | Dataset    | Model             | Metric              | Value           | Global Rank |
|--------------------------|------------|-------------------|---------------------|-----------------|-------------|
| Node Property Prediction | ogbn-arxiv | JKNet (GCN-based) | Test Accuracy       | 0.7219 ± 0.0021 | # 59        |
| Node Property Prediction | ogbn-arxiv | JKNet (GCN-based) | Validation Accuracy | 0.7335 ± 0.0007 | # 59        |
| Node Property Prediction | ogbn-arxiv | JKNet (GCN-based) | Number of params    | 89,000          | # 71        |
| Node Property Prediction | ogbn-arxiv | JKNet (GCN-based) | External data       | No              | # 1         |
| Node Classification      | PPI        | JK-LSTM           | F1                  | 97.6            | # 14        |