Walking Out of the Weisfeiler Leman Hierarchy: Graph Learning Beyond Message Passing

17 Feb 2021 · Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe

We propose CRaWl, a novel neural network architecture for graph learning. Like graph neural networks, CRaWl layers update node features on a graph and can therefore be freely combined or interleaved with GNN layers. Yet CRaWl operates fundamentally differently from message-passing graph neural networks: its layers extract and aggregate information from subgraphs appearing along random walks through a graph using 1D convolutions, thereby detecting long-range interactions and computing non-local features. As the theoretical basis for our approach, we prove a theorem stating that the expressiveness of CRaWl is incomparable with that of the Weisfeiler Leman algorithm, and hence with that of graph neural networks: there are functions expressible by CRaWl but not by GNNs, and vice versa. This result extends to higher levels of the Weisfeiler Leman hierarchy and thus to higher-order GNNs. Empirically, we show that CRaWl matches state-of-the-art GNN architectures across a multitude of benchmark datasets for classification and regression on graphs.
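The core mechanism described above, sampling random walks and running a 1D convolution over the feature sequence along each walk, can be illustrated with a minimal numpy sketch. This is a simplification, not the paper's implementation: CRaWl additionally feeds local structural encodings (identity and adjacency patterns within the convolution window) into the convolution, stacks multiple layers, and aggregates over many walks. The function names (`random_walk`, `conv1d_walk`) and the toy graph are illustrative assumptions.

```python
import numpy as np

def random_walk(adj, start, length, rng):
    """Sample a uniform random walk of `length` steps from `start`."""
    walk = [start]
    for _ in range(length):
        nbrs = adj[walk[-1]]
        walk.append(rng.choice(nbrs))
    return walk

def conv1d_walk(X, W, b):
    """Valid 1D convolution along the walk axis, followed by ReLU.
    X: (walk_len, d_in), W: (window, d_in, d_out), b: (d_out,)."""
    window = X.shape[0] - W.shape[0] + 1
    out = np.empty((window, W.shape[2]))
    for t in range(window):
        out[t] = np.einsum('wi,wio->o', X[t:t + W.shape[0]], W) + b
    return np.maximum(out, 0.0)

# toy graph: a 6-cycle with random node features
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(6, 4))

walk = random_walk(adj, start=0, length=7, rng=rng)   # 8 nodes visited
X = node_feats[walk]                                  # features along the walk
W = rng.normal(size=(3, 4, 8)) * 0.1                  # window size 3
out = conv1d_walk(X, W, np.zeros(8))                  # one vector per window

# update node features: pool each window's output onto the nodes it covers
pooled, counts = np.zeros((6, 8)), np.zeros(6)
for t in range(out.shape[0]):
    for v in walk[t:t + 3]:
        pooled[v] += out[t]
        counts[v] += 1
pooled[counts > 0] /= counts[counts > 0, None]
```

Because each window spans several consecutive walk steps, a node's updated feature can depend on nodes several hops away, which is how walk-based convolution picks up the non-local information the abstract refers to.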


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Graph Property Prediction | ogbg-molpcba | CRaWl | Test AP | 0.2986 ± 0.0025 | #9 |
| Graph Property Prediction | ogbg-molpcba | CRaWl | Validation AP | 0.3075 ± 0.0020 | #8 |
| Graph Property Prediction | ogbg-molpcba | CRaWl | Number of params | 6,115,728 | #11 |
| Graph Property Prediction | ogbg-molpcba | CRaWl | Ext. data | No | #1 |
| Graph Classification | REDDIT-B | CRaWl | Accuracy | 93.15 | #1 |
| Graph Regression | ZINC | CRaWl+VN | MAE | 0.088 | #10 |
| Graph Regression | ZINC | CRaWl | MAE | 0.101 | #13 |
| Graph Regression | ZINC-500k | CRaWl+VN | MAE | 0.088 | #10 |
| Graph Regression | ZINC-500k | CRaWl | MAE | 0.101 | #14 |
