Directional Graph Networks

The lack of anisotropic kernels in graph neural networks (GNNs) strongly limits their expressiveness, contributing to well-known issues such as over-smoothing. To overcome this limitation, we propose the first globally consistent anisotropic kernels for GNNs, allowing for graph convolutions that are defined according to topologically-derived directional flows. First, by defining a vector field in the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then, we propose the use of the Laplacian eigenvectors as such a vector field. We show that the method generalizes CNNs on an $n$-dimensional grid and is provably more discriminative than standard GNNs with respect to the Weisfeiler-Lehman 1-WL test. We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and 11% to 32% on the molecular ZINC dataset, and a relative increase in precision of 1.6% on the MolPCBA dataset. An important outcome of this work is that it enables graph networks to embed directions in an unsupervised way, thus allowing a better representation of the anisotropic features in different physical or biological problems.
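To make the directional aggregation concrete, the sketch below (not the authors' released implementation) shows one way the first non-trivial Laplacian eigenvector can define an edge-wise field, from which a directional-smoothing matrix and a directional-derivative matrix are built and applied to node features. The function name directional_matrices and the exact normalization are illustrative assumptions.

# Minimal sketch of directional aggregation along the Fiedler vector of the
# graph Laplacian. Assumes an undirected graph given as a dense adjacency
# matrix; normalization and naming are illustrative, not the paper's exact code.
import numpy as np


def directional_matrices(adj: np.ndarray):
    """Return a directional-average (B_av) and a directional-derivative (B_dx)
    aggregation matrix derived from the first non-trivial Laplacian eigenvector."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                               # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)
    phi = eigvecs[:, 1]                           # first non-trivial eigenvector

    # Edge-wise field: difference of eigenvector values along each edge.
    F = adj * (phi[None, :] - phi[:, None])

    # Row-normalize so each node's incoming weights have unit L1 norm.
    norm = np.abs(F).sum(axis=1, keepdims=True) + 1e-8
    B_av = np.abs(F) / norm                       # directional smoothing
    B_dx = F / norm                               # directional derivative
    # Center the derivative operator so constant signals map to zero.
    B_dx = B_dx - np.diag(B_dx.sum(axis=1))
    return B_av, B_dx


# Toy usage: directional messages on a 4-node path graph with 3-d node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
B_av, B_dx = directional_matrices(adj)
H = np.concatenate([B_av @ X, B_dx @ X], axis=1)  # concatenated directional messages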

Results

Task: Graph Classification | Dataset: CIFAR10 100k | Model: DGN
    Accuracy (%): 72.84 (Global Rank #5)

Task: Graph Property Prediction | Dataset: ogbg-molhiv | Model: DGN
    Test ROC-AUC: 0.7970 ± 0.0097 (Global Rank #20)
    Validation ROC-AUC: 0.8470 ± 0.0047 (Global Rank #5)
    Number of params: 114,065 (Global Rank #34)
    External data: No (Global Rank #1)

Task: Graph Property Prediction | Dataset: ogbg-molpcba | Model: DGN
    Test AP: 0.2885 ± 0.0030 (Global Rank #19)
    Validation AP: 0.2970 ± 0.0021 (Global Rank #18)
    Number of params: 6,732,696 (Global Rank #10)
    External data: No (Global Rank #1)

Results from Other Papers


Task: Node Classification | Dataset: PATTERN 100k | Model: DGN
    Accuracy (%): 86.680 (Rank #2)

Methods


No methods listed for this paper.