Adaptive Sampling Towards Fast Graph Representation Learning

Graph Convolutional Networks (GCNs) have become a crucial tool for learning representations of graph vertices. The main challenge in applying GCNs to large-scale graphs is scalability: the uncontrollable neighborhood expansion across layers incurs heavy computation and memory costs. In this paper, we accelerate the training of GCNs by developing an adaptive layer-wise sampling method. Constructing the network layer by layer in a top-down manner, we sample each lower layer conditioned on the layer above, so that the sampled neighborhoods are shared by different parent nodes and over-expansion is avoided thanks to the fixed-size sampling. More importantly, the proposed sampler is adaptive and admits explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote message passing over distant nodes by applying skip connections. Extensive experiments on several benchmarks verify the effectiveness of our method in terms of classification accuracy, while enjoying faster convergence.
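To make the layer-wise construction concrete, the sketch below illustrates top-down, fixed-size sampling in which all parent nodes share one pool of sampled children. The function names (`layerwise_sample`, `build_layers`) and the degree-proportional weights are illustrative assumptions only; the paper's actual sampler is learned adaptively with explicit variance reduction, which this placeholder does not implement.

```python
import numpy as np

def layerwise_sample(adj, top_nodes, n_samples, rng=None):
    """Sample one lower layer conditioned on the nodes of the layer above.

    Fixed-size, layer-wise (not per-node) sampling: all `top_nodes` share
    the same pool of sampled neighbors, so the receptive field grows
    additively per layer instead of multiplicatively.

    adj       : scipy.sparse CSR adjacency matrix (N x N)
    top_nodes : indices of nodes in the upper (already built) layer
    n_samples : fixed number of lower-layer nodes to draw
    """
    rng = rng or np.random.default_rng()
    # Candidate pool: union of neighbors of all parent nodes.
    sub = adj[top_nodes]                   # rows of the parent nodes
    candidates = np.unique(sub.indices)    # all neighbor ids
    # Placeholder importance weights: proportional to how strongly each
    # candidate connects to the current parents. (AS-GCN instead learns
    # the sampling distribution; this is only a degree-based proxy.)
    weights = np.asarray(sub[:, candidates].sum(axis=0)).ravel()
    total = weights.sum()
    probs = (weights / total if total > 0
             else np.full(len(candidates), 1.0 / len(candidates)))
    n = min(n_samples, len(candidates))
    return rng.choice(candidates, size=n, replace=False, p=probs)

def build_layers(adj, batch_nodes, sizes):
    """Construct the network top-down: start from the output nodes and
    sample each lower layer conditioned on the one above."""
    layers = [np.asarray(batch_nodes)]
    for s in sizes:
        layers.append(layerwise_sample(adj, layers[-1], s))
    return layers  # layers[0] = output nodes, layers[-1] = input layer

# Usage sketch (hypothetical data):
# import scipy.sparse as sp
# adj = sp.random(1000, 1000, density=0.01, format="csr")
# layers = build_layers(adj, batch_nodes=[0, 1, 2], sizes=[64, 64])
```

Because each layer draws a fixed number of nodes shared across all parents, a depth-`L` network touches at most `L * n_samples` nodes per batch, which is what keeps the cost controlled compared with recursive per-node neighbor expansion.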

NeurIPS 2018 · PDF · Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Node Classification | Citeseer (Full-supervised) | AS-GCN | Accuracy | 79.66% | #2 |
| Node Classification | Cora | AS-GCN | Accuracy | 87.44% ± 0.0034% | #17 |
| Node Classification | Cora (Full-supervised) | AS-GCN | Accuracy | 87.44% ± 0.0034% | #4 |
| Node Classification | Pubmed (Full-supervised) | AS-GCN | Accuracy | 90.6% | #2 |
| Node Classification | Reddit | AS-GCN | Accuracy | 96.27% | #8 |

Methods

No methods listed for this paper.