A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking

Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs). Because the graph structure is woven into the training process, vanilla GNNs usually fail to scale, limited by GPU memory. Although numerous scalable GNN architectures have been proposed, we still lack a comprehensive survey and a fair benchmark of this body of work from which to distill the rationale for designing scalable GNNs. To this end, we first systematically categorize the representative large-scale graph training methods into several branches, and then establish a fair and consistent benchmark for them via greedy hyperparameter search. Regarding efficiency, we theoretically evaluate the time and space complexity of each branch and empirically compare them w.r.t. GPU memory usage, throughput, and convergence. Furthermore, we analyze the pros and cons of each branch of scalable GNNs and present a new ensembling training manner, named EnGCN, to address the existing issues. Our code is available at https://github.com/VITA-Group/Large_Scale_GCN_Benchmarking.
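
To make the "greedy hyperparameter searching" concrete, below is a minimal sketch of one common reading of that idea: tune one hyperparameter at a time, freezing the best value found so far before moving to the next dimension. The function names, the search space, and the `evaluate` callback are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a greedy hyperparameter search (assumption, not the
# paper's exact procedure): sweep one hyperparameter at a time, keeping
# the best value found so far fixed while tuning the next one.
def greedy_search(search_space, evaluate):
    """search_space: dict of name -> list of candidate values;
    evaluate: callable(config) -> validation score. Both illustrative."""
    # Start from the first candidate of every hyperparameter.
    config = {name: values[0] for name, values in search_space.items()}
    for name, values in search_space.items():    # one dimension at a time
        best_score, best_value = float("-inf"), config[name]
        for v in values:
            trial = {**config, name: v}          # vary only this dimension
            score = evaluate(trial)
            if score > best_score:
                best_score, best_value = score, v
        config[name] = best_value                # freeze the winner, move on
    return config

# Example usage (hypothetical search space and evaluator):
# best = greedy_search({"lr": [0.01, 0.001], "hidden": [128, 256]}, eval_fn)
```

Compared with a full grid search, this explores only a sum, rather than a product, of the candidate counts per dimension, which is what makes a consistent benchmark over many methods tractable.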
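For intuition about the "ensembling training manner," here is a minimal PyTorch sketch in the spirit of EnGCN, under the assumption that each weak learner is an MLP trained on features propagated one more hop (SGC-style), with logits averaged at the end. It is not the authors' implementation (see the linked repo for that), and it omits details such as the paper's self-label enhancement; all names below are illustrative.

```python
# Minimal sketch (assumption, not the authors' code): layer-wise weak
# learners over propagated features, ensembled by summing their logits.
import torch
import torch.nn.functional as F

def train_ensemble(X, A_hat, y, train_mask, num_hops=3, epochs=100, lr=0.01):
    """X: [N, F] node features; A_hat: [N, N] normalized adjacency (sparse);
    y: [N] integer labels; train_mask: [N] bool. All names illustrative."""
    num_classes = int(y.max()) + 1
    logits_sum = torch.zeros(X.size(0), num_classes)
    H = X
    for hop in range(num_hops):
        H = torch.sparse.mm(A_hat, H)            # propagate one more hop
        mlp = torch.nn.Sequential(               # this hop's weak learner
            torch.nn.Linear(H.size(1), 256), torch.nn.ReLU(),
            torch.nn.Linear(256, num_classes),
        )
        opt = torch.optim.Adam(mlp.parameters(), lr=lr)
        for _ in range(epochs):                  # train on labeled nodes only
            opt.zero_grad()
            loss = F.cross_entropy(mlp(H[train_mask]), y[train_mask])
            loss.backward()
            opt.step()
        with torch.no_grad():                    # accumulate ensemble logits
            logits_sum += mlp(H)
    return logits_sum.argmax(dim=1)              # ensembled prediction
```

Because propagation is done once per hop as a preprocessing-style step and each learner is a plain MLP, peak GPU memory is decoupled from graph size, which is the scalability property the abstract is after.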

Task                     | Dataset       | Model                     | Metric              | Metric Value    | Global Rank
Node Classification      | Flickr        | EnGCN (Duan et al., 2022) | Accuracy            | 0.562           | #4
Node Property Prediction | ogbn-products | EnGCN                     | Test Accuracy       | 0.8798 ± 0.0004 | #2
                         |               |                           | Validation Accuracy | 0.9241 ± 0.0003 | #30
                         |               |                           | Number of params    | 653918          | #36
                         |               |                           | Ext. data           | No              | #1
Node Classification      | Reddit        | EnGCN                     | Accuracy            | 96.65%          | #7
