no code implementations • 6 Jun 2023 • Francesco Di Giovanni, T. Konstantin Rusch, Michael M. Bronstein, Andreea Deac, Marc Lackenby, Siddhartha Mishra, Petar Veličković
In this paper, we provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity.
no code implementations • 20 Mar 2023 • T. Konstantin Rusch, Michael M. Bronstein, Siddhartha Mishra
Node features of graph neural networks (GNNs) tend to become increasingly similar as the network depth increases.
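This oversmoothing effect can be seen in a toy example (our own illustration, not from the paper): repeated neighbourhood averaging, a stripped-down GNN layer, drives node features toward a common value, which we track with the Dirichlet energy E(X) = Σ_{(i,j)∈edges} ||x_i − x_j||², a standard measure of feature dissimilarity across edges.

```python
import numpy as np

# 4-node path graph: 0-1-2-3, with self-loops to keep the update "lazy"
edges = [(0, 1), (1, 2), (2, 3)]
A = np.eye(4)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)      # row-normalised averaging operator

def dirichlet_energy(X):
    # sum of squared feature differences over edges
    return sum(np.sum((X[i] - X[j]) ** 2) for i, j in edges)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))           # random 2-d node features

energies = []
for _ in range(10):
    energies.append(dirichlet_energy(X))
    X = P @ X                             # one layer of averaging

# the energy decays toward 0: the features oversmooth
print(energies[0], energies[-1])
```

After ten rounds of averaging the Dirichlet energy has collapsed, i.e. neighbouring features have become nearly identical.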
no code implementations • 7 Feb 2023 • Léonard Equer, T. Konstantin Rusch, Siddhartha Mishra
We propose a novel multi-scale message passing neural network algorithm for learning the solutions of time-dependent PDEs.
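A hypothetical two-scale sketch of the general idea (the pooling scheme and combination rule here are our own simplifications, not the paper's algorithm): messages are passed on the fine 1-D grid and on a 2x-coarsened copy, so information propagates farther per layer.

```python
import numpy as np

def fine_step(u):
    # nearest-neighbour message passing on a periodic 1-D grid
    return 0.5 * u + 0.25 * (np.roll(u, 1) + np.roll(u, -1))

def multiscale_step(u):
    coarse = 0.5 * (u[0::2] + u[1::2])    # pool to the coarse grid
    coarse = fine_step(coarse)            # message passing at the coarse scale
    up = np.repeat(coarse, 2)             # unpool back to the fine grid
    return 0.5 * (fine_step(u) + up)      # combine both scales

u0 = np.zeros(64)
u0[32] = 1.0                              # localised initial condition
u = multiscale_step(u0)
# after one step the signal has spread beyond nearest neighbours,
# while the total "mass" is conserved
print(np.count_nonzero(np.abs(u) > 1e-12), u.sum())
```

With only the fine-scale update the signal would reach three grid points per step; the coarse branch widens the receptive field, which is the point of multi-scale message passing for PDEs with long-range interactions.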
1 code implementation • 2 Oct 2022 • T. Konstantin Rusch, Benjamin P. Chamberlain, Michael W. Mahoney, Michael M. Bronstein, Siddhartha Mishra
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Ranked #3 on Node Classification on arXiv-year
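A loose numpy sketch of the gating idea (our simplified reading, not the exact G$^2$ update): each node's update rate is gated by how much its features still differ from its neighbours', so updates stall, and oversmoothing is damped, once local feature gradients vanish.

```python
import numpy as np

# small non-bipartite graph (cycle plus a chord)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)      # mean aggregation

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 2))

for _ in range(5):
    M = P @ X                             # candidate message-passing update
    # per-node graph-gradient magnitude: mean squared feature
    # difference to the node's neighbours (always non-negative)
    grad = (X ** 2 - 2 * X * (P @ X) + P @ (X ** 2)).sum(1, keepdims=True)
    tau = np.tanh(grad)                   # gate in [0, 1)
    X = (1 - tau) * X + tau * M           # gated update: small gradient -> small step
```

The gate `tau` and the gradient measure here are hypothetical stand-ins for the paper's construction; they only illustrate how gating updates by local graph gradients can freeze the dynamics before features collapse.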
1 code implementation • 4 Feb 2022 • T. Konstantin Rusch, Benjamin P. Chamberlain, James Rowbottom, Siddhartha Mishra, Michael M. Bronstein
This demonstrates that the proposed framework mitigates the oversmoothing problem.
1 code implementation • ICLR 2022 • T. Konstantin Rusch, Siddhartha Mishra, N. Benjamin Erichson, Michael W. Mahoney
We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies.
Ranked #1 on Time Series Classification on EigenWorms
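A minimal numpy sketch of an LEM-style recurrent cell as we understand it (two hidden states y, z updated with input-dependent, sigmoid-gated step sizes that realise a continuum of time scales); the weight shapes and exact update below are our reading, so treat this as illustrative rather than a faithful reimplementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_in, d_h, dt_max = 3, 8, 1.0
W1, W2, Wz, Wy = (0.1 * rng.standard_normal((d_h, d_h)) for _ in range(4))
V1, V2, Vz, Vy = (0.1 * rng.standard_normal((d_h, d_in)) for _ in range(4))
b1 = b2 = bz = by = np.zeros(d_h)

def lem_step(y, z, u):
    dt1 = dt_max * sigmoid(W1 @ y + V1 @ u + b1)   # per-unit step size for z
    dt2 = dt_max * sigmoid(W2 @ y + V2 @ u + b2)   # per-unit step size for y
    z = (1 - dt1) * z + dt1 * np.tanh(Wz @ y + Vz @ u + bz)
    y = (1 - dt2) * y + dt2 * np.tanh(Wy @ z + Vy @ u + by)
    return y, z

y = z = np.zeros(d_h)
for u in rng.standard_normal((20, d_in)):
    y, z = lem_step(y, z, u)
# each update is a convex combination of the old state and a tanh term,
# so the hidden states stay in [-1, 1] by construction
```

The gated convex-combination form is what keeps gradients well behaved over long sequences: the step sizes dt1, dt2 can shrink toward 0 for slow components and grow toward dt_max for fast ones.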
1 code implementation • 9 Mar 2021 • T. Konstantin Rusch, Siddhartha Mishra
The design of recurrent neural networks (RNNs) to accurately process sequential inputs with long-time dependencies is very challenging on account of the exploding and vanishing gradient problem.
Ranked #2 on Time Series Classification on EigenWorms
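The problem this entry refers to can be shown in a few lines: for a vanilla RNN h_t = tanh(W h_{t−1}), the gradient of h_T with respect to h_0 is the product of the per-step Jacobians diag(1 − h_t²) W, and its norm shrinks (or, for large W, explodes) exponentially in T.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# small weights: spectral norm of W is well below 1, so gradients vanish
W = rng.standard_normal((d, d)) * 0.25 / np.sqrt(d)

h = rng.standard_normal(d)
J = np.eye(d)                            # accumulated Jacobian dh_t / dh_0
norms = []
for _ in range(50):
    h = np.tanh(W @ h)
    J = np.diag(1 - h ** 2) @ W @ J      # chain rule through one step
    norms.append(np.linalg.norm(J))

# the gradient signal from 50 steps back has all but vanished
print(norms[0], norms[-1])
```

This exponential decay is exactly why learning long-time dependencies with vanilla RNNs is hard, and what architectures like the one in this paper are designed to avoid.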
1 code implementation • ICLR 2021 • T. Konstantin Rusch, Siddhartha Mishra
Circuits of biological neurons, such as those in the functional parts of the brain, can be modeled as networks of coupled oscillators.
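A generic toy network of coupled, damped oscillators (our own construction, in the spirit this entry describes): each "neuron" y_i obeys y_i'' = tanh((W y)_i) − γ y_i − ε y_i', integrated with semi-implicit Euler, which keeps the oscillations stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, gamma, eps = 8, 0.05, 1.0, 0.1
W = rng.standard_normal((n, n)) / np.sqrt(n)   # coupling weights

y = rng.standard_normal(n)   # positions (neuron states)
v = np.zeros(n)              # velocities y'
for _ in range(200):
    v = v + dt * (np.tanh(W @ y) - gamma * y - eps * v)
    y = y + dt * v           # uses the updated v (semi-implicit Euler)

# bounded tanh forcing plus damping keep the dynamics bounded
print(float(np.max(np.abs(y))))
```

The bounded, non-decaying oscillatory dynamics are the property that makes such systems attractive as recurrent architectures for long sequences.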
1 code implementation • 26 May 2020 • Siddhartha Mishra, T. Konstantin Rusch
We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set.
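A sketch of the training-set idea: draw training points from a low-discrepancy sequence (a Halton sequence here, chosen for simplicity) instead of i.i.d. random sampling, so the points cover the domain more evenly and integrals over the data distribution are approximated at a quasi-Monte Carlo rate.

```python
import numpy as np

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# 2-d Halton points in [0,1]^2 (coprime bases 2 and 3)
n = 256
pts = np.array([[halton(i, 2), halton(i, 3)] for i in range(1, n + 1)])

# quasi-Monte Carlo estimate of  ∫∫ x*y dx dy = 1/4  over the unit square
qmc_est = np.mean(pts[:, 0] * pts[:, 1])
print(qmc_est)   # close to 0.25
```

Using such points as the training set means the empirical training loss converges to the population loss faster than with random samples, which is the premise the entry describes.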