2 code implementations • 22 Jun 2022 • Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, Michael M. Bronstein
We do so by showing that linear graph convolutions with symmetric weights minimize a multi-particle energy that generalizes the Dirichlet energy; in this setting, the weight matrices induce edge-wise attraction (repulsion) through their positive (negative) eigenvalues, thereby controlling whether the features are being smoothed or sharpened.
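The energy view above admits a compact sketch (the toy graph, step size, and helper names below are invented for illustration, not the paper's code): a linear graph convolution with a symmetric weight matrix W can be read as gradient descent on a generalized Dirichlet energy E(X) = ½ tr(XᵀLXW), so positive eigenvalues of W attract neighbouring features (smoothing) while negative eigenvalues repel them (sharpening).

```python
import numpy as np

# Toy 4-node path graph (purely illustrative).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

def dirichlet_energy(X, W):
    """Generalized Dirichlet energy E(X) = 1/2 tr(X^T L X W), W symmetric."""
    return 0.5 * np.trace(X.T @ L @ X @ W)

def gcn_step(X, W, tau=0.1):
    """One linear graph-convolution step = one gradient-descent step on E
    (the gradient of E with respect to X is L X W for symmetric W and L)."""
    return X - tau * L @ X @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))

W_smooth = np.eye(2)                # positive eigenvalues -> edge-wise attraction
W_sharp = -np.eye(2)                # negative eigenvalues -> edge-wise repulsion

e0 = dirichlet_energy(X, W_smooth)
e1 = dirichlet_energy(gcn_step(X, W_smooth), W_smooth)
assert e1 < e0                      # smoothing: the energy decreases

# With W = -I the same update becomes X + tau * L @ X, which *increases*
# the plain Dirichlet energy: features are sharpened along edges.
```

The smoothing/sharpening dichotomy is thus controlled entirely by the spectrum of W, with no change to the update rule itself.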
1 code implementation • 21 May 2022 • Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen
Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research.
1 code implementation • 4 Feb 2022 • T. Konstantin Rusch, Benjamin P. Chamberlain, James Rowbottom, Siddhartha Mishra, Michael M. Bronstein
This demonstrates that the proposed framework mitigates the oversmoothing problem.
1 code implementation • NeurIPS 2021 • Benjamin Paul Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, Michael M. Bronstein
We propose a novel class of graph neural networks based on the discretised Beltrami flow, a non-Euclidean diffusion PDE.
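As a rough illustration of what one discretised step of such a flow might look like (the graph, the Gaussian edge weighting, and the function name below are all invented; this sketches the general idea of diffusing joint position/feature coordinates, not the paper's actual discretisation): each node carries a joint coordinate vector, edge weights decay with distance in that joint space, and an explicit-Euler step mixes each node with its neighbours.

```python
import numpy as np

def beltrami_step(Z, edges, tau=0.1):
    """One Beltrami-style diffusion step on joint coordinates Z (n x d).

    Mixing weights depend on closeness in the joint position/feature
    space, so nearby nodes diffuse into each other more strongly."""
    n = Z.shape[0]
    W = np.zeros((n, n))
    for i, j in edges:                          # undirected edge list
        w = np.exp(-np.sum((Z[i] - Z[j]) ** 2))  # closeness in joint space
        W[i, j] = W[j, i] = w
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    return Z + tau * (W @ Z - Z)                # explicit Euler step

# Toy connected graph, purely for illustration.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
Z = np.random.default_rng(2).standard_normal((4, 5))
Z1 = beltrami_step(Z, edges)
```

Because each updated row is a convex combination of a node and its neighbours, the per-feature range across nodes cannot grow, which is the discrete analogue of diffusion.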
1 code implementation • NeurIPS Workshop DLDE 2021 • Benjamin Paul Chamberlain, James Rowbottom, Maria Gorinova, Stefan Webb, Emanuele Rossi, Michael M. Bronstein
We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE.
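A minimal sketch of the diffusion-as-GNN view (toy graph; GRAND learns its mixing operator via attention, whereas this sketch substitutes a fixed row-normalized adjacency): the ODE dX/dt = (P − I)X is discretised with explicit Euler, and each Euler step plays the role of one GNN layer, so depth corresponds to integration time.

```python
import numpy as np

# Toy connected graph (illustrative only); in GRAND the mixing matrix
# comes from learned attention, here a fixed operator stands in for it.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)    # row-stochastic diffusion operator

def grand_euler(X, P, tau=0.2, steps=10):
    """Explicit-Euler discretization of the diffusion ODE dX/dt = (P - I) X.

    Each step X <- X + tau * (P X - X) acts like one GNN layer;
    more steps correspond to a deeper (longer-time) diffusion."""
    for _ in range(steps):
        X = X + tau * (P @ X - X)
    return X

X0 = np.random.default_rng(1).standard_normal((4, 3))
XT = grand_euler(X0, P)

# Diffusion contracts features toward a graph-dependent consensus,
# so the spread of features across nodes shrinks.
assert XT.std(axis=0).sum() < X0.std(axis=0).sum()
```

The continuous-time framing is what lets GRAND borrow stability and step-size analysis from numerical PDE solvers instead of hand-tuning layer counts.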