Towards Deeper Graph Neural Networks

18 Jul 2020  ·  Meng Liu, Hongyang Gao, Shuiwang Ji

Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, a single layer of these neighborhood aggregation methods considers only immediate neighbors, and performance degrades when more layers are stacked to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states that repeated propagation makes node representations of different classes indistinguishable. In this work, we study this observation systematically and develop new insights towards deeper graph neural networks. First, we provide a systematic analysis of this issue and argue that the key factor that significantly compromises performance is the entanglement of representation transformation and propagation in current graph convolution operations. After decoupling these two operations, deeper graph neural networks can be used to learn graph node representations from larger receptive fields. We further provide a theoretical analysis of the above observation when building very deep models, which can serve as a rigorous and gentle description of the over-smoothing issue. Based on our theoretical and empirical analysis, we propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields. Experiments on citation, co-authorship, and co-purchase datasets confirm our analysis and insights and demonstrate the superiority of our proposed method.
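
The abstract's core idea can be illustrated with a minimal sketch (not the authors' reference implementation): transformation is a plain MLP, propagation is K hops of symmetrically normalized adjacency with no per-hop weights, and the hop-wise representations are combined with a learned gating score. The dense adjacency handling, class/hyperparameter choices, and the `DAGNNSketch` name below are illustrative assumptions.

```python
# Sketch of DAGNN's decoupled transformation/propagation with adaptive hop combination.
# Uses a dense adjacency matrix for simplicity; shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sym_norm_adj(adj: torch.Tensor) -> torch.Tensor:
    """D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


class DAGNNSketch(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int, k: int = 10):
        super().__init__()
        self.k = k
        # Transformation: an MLP applied independently of the graph structure.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )
        # Adaptive adjustment: one retainment score per hop-wise representation.
        self.gate = nn.Linear(num_classes, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_norm = sym_norm_adj(adj)
        h = self.mlp(x)                        # [N, C] transformed features
        hops = [h]
        for _ in range(self.k):                # propagation only, no extra parameters
            h = a_norm @ h
            hops.append(h)
        hs = torch.stack(hops, dim=1)          # [N, K+1, C]
        scores = torch.sigmoid(self.gate(hs))  # [N, K+1, 1] per-node, per-hop scores
        out = (scores * hs).sum(dim=1)         # adaptive combination over hops
        return F.log_softmax(out, dim=-1)


# Toy usage: 5 nodes on a ring graph, 8-dimensional features, 3 classes.
if __name__ == "__main__":
    adj = torch.zeros(5, 5)
    for i in range(5):
        adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
    model = DAGNNSketch(in_dim=8, hidden_dim=16, num_classes=3, k=4)
    logits = model(torch.randn(5, 8), adj)
    print(logits.shape)  # torch.Size([5, 3])
```

Because the propagation loop carries no trainable weights of its own, increasing the depth `k` enlarges the receptive field without deepening the transformation, which is the decoupling the abstract argues for.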

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Node Classification | AMZ Computers | DAGNN (Ours) | Accuracy | 84.5 ± 1.2 | # 2 |
| Node Classification | AMZ Photo | DAGNN (Ours) | Accuracy | 92% | # 10 |
| Node Classification | CiteSeer with Public Split: fixed 20 nodes per class | DAGNN (Ours) | Accuracy | 73.3 ± 0.6 | # 17 |
| Node Classification | Coauthor CS | DAGNN (Ours) | Accuracy | 92.8% | # 13 |
| Node Classification | Coauthor Physics | DAGNN (Ours) | Accuracy | 94 | # 8 |
| Node Classification | Cora with Public Split: fixed 20 nodes per class | DAGNN (Ours) | Accuracy | 84.4 ± 0.5 | # 7 |
| Node Property Prediction | ogbn-arxiv | DAGNN | Test Accuracy | 0.7209 ± 0.0025 | # 63 |
| Node Property Prediction | ogbn-arxiv | DAGNN | Validation Accuracy | 0.7290 ± 0.0011 | # 68 |
| Node Property Prediction | ogbn-arxiv | DAGNN | Number of params | 43857 | # 72 |
| Node Property Prediction | ogbn-arxiv | DAGNN | Ext. data | No | # 1 |
| Node Classification | PubMed with Public Split: fixed 20 nodes per class | DAGNN (Ours) | Accuracy | 80.5 ± 0.5 | # 11 |
