MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing

Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. MixHop requires no additional memory or computational complexity, and outperforms challenging baselines. In addition, we propose sparsity regularization that allows us to visualize how the network prioritizes neighborhood information across different graph datasets. Our analysis of the learned architectures reveals that neighborhood mixing varies per dataset.
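Concretely, a MixHop layer mixes neighborhoods at several distances by propagating features through different powers of the normalized adjacency matrix, transforming each with its own weight matrix, and concatenating the results so that later layers can learn difference operators between hops. The following is a minimal sketch assuming a PyTorch-style implementation with a dense adjacency matrix; the class name MixHopLayer, the chosen powers (0, 1, 2), and the toy graph are illustrative assumptions, not the authors' reference code.

    # Minimal MixHop-style layer sketch (illustrative, not the reference implementation).
    import torch
    import torch.nn as nn


    class MixHopLayer(nn.Module):
        """Mixes neighborhood information at several distances by applying
        powers of the normalized adjacency matrix and concatenating the results."""

        def __init__(self, in_dim, out_dim_per_power, powers=(0, 1, 2)):
            super().__init__()
            self.powers = powers
            # One linear transform W_j per adjacency power j.
            self.linears = nn.ModuleList(
                [nn.Linear(in_dim, out_dim_per_power) for _ in powers]
            )

        def forward(self, x, adj_norm):
            # x: (num_nodes, in_dim) node features
            # adj_norm: (num_nodes, num_nodes) symmetrically normalized adjacency
            outputs = []
            for power, linear in zip(self.powers, self.linears):
                h = x
                for _ in range(power):       # A^j X via repeated propagation
                    h = adj_norm @ h
                outputs.append(linear(h))    # A^j X W_j
            # Concatenating (rather than summing) lets subsequent layers learn
            # difference operators between neighborhoods at different distances.
            return torch.cat(outputs, dim=-1)


    def symmetric_normalize(adj):
        """D^{-1/2} (A + I) D^{-1/2}, the usual GCN-style renormalization."""
        adj = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


    if __name__ == "__main__":
        # Toy usage: 5 nodes, 8 input features, a small random symmetric graph.
        adj = (torch.rand(5, 5) > 0.5).float()
        adj = ((adj + adj.t()) > 0).float()
        adj.fill_diagonal_(0)
        x = torch.randn(5, 8)
        layer = MixHopLayer(in_dim=8, out_dim_per_power=16, powers=(0, 1, 2))
        out = layer(x, symmetric_normalize(adj))
        print(out.shape)                     # torch.Size([5, 48])

The sparsity regularization described in the abstract is applied on top of such layers to prune per-power output columns, which is what makes the learned mixing of neighborhood distances visible per dataset.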


Results from the Paper


Task                 Dataset   Model   Metric Name     Metric Value  Global Rank
Node Classification  Citeseer  MixHop  Accuracy        71.4%         # 50
                                       Training Split  20 per node   # 1
                                       Validation      YES           # 1
Node Classification  Cora      MixHop  Accuracy        81.9%         # 54
                                       Training Split  20 per node   # 1
                                       Validation      YES           # 1
Node Classification  Pubmed    MixHop  Accuracy        80.8%         # 27
                                       Training Split  20 per node   # 1
                                       Validation      YES           # 1

Methods


No methods listed for this paper.