no code implementations • 3 Jul 2023 • Zhengdao Chen
Characterizing the function space explored by neural networks (NNs) is an important aspect of learning theory.
no code implementations • 21 Dec 2022 • Xinyi Wu, Zhengdao Chen, William Wang, Ali Jadbabaie
Oversmoothing is a central challenge in building more powerful Graph Neural Networks (GNNs).
no code implementations • 28 Oct 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
To understand the training dynamics of neural networks (NNs), prior studies have considered the infinite-width mean-field (MF) limit of two-layer NNs, establishing theoretical guarantees of its convergence under gradient flow training as well as its approximation and generalization capabilities.
no code implementations • 22 Apr 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
We study the optimization of wide neural networks (NNs) via gradient flow (GF) in setups that allow feature learning while admitting non-asymptotic global convergence guarantees.
no code implementations • ICLR 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
We study the optimization of over-parameterized shallow and multi-layer neural networks (NNs) in a regime that allows feature learning while admitting non-asymptotic global convergence guarantees.
1 code implementation • ICLR 2021 • Lei Chen, Zhengdao Chen, Joan Bruna
From the perspective of expressive power, this work compares multi-layer Graph Neural Networks (GNNs) with a simplified alternative that we call Graph-Augmented Multi-Layer Perceptrons (GA-MLPs), which first augments node features with certain multi-hop operators on the graph and then applies an MLP in a node-wise fashion.
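The GA-MLP recipe described above — augment node features with multi-hop graph operators, then apply a shared MLP to each node independently — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of powers of a row-normalized adjacency as the multi-hop operators, and the two-layer ReLU MLP, are assumptions for the example.

```python
import numpy as np

def ga_mlp_features(A, X, num_hops=3):
    """Augment node features with multi-hop operators: [X, AX, A^2 X, ...].

    Row-normalizing the adjacency keeps the powers well-scaled (one common
    choice; the paper considers a family of such operators).
    """
    deg = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(deg, 1.0)
    feats = [X]
    for _ in range(num_hops):
        feats.append(A_norm @ feats[-1])
    return np.concatenate(feats, axis=1)  # shape: (n_nodes, (num_hops+1)*d)

def node_wise_mlp(Z, W1, b1, W2, b2):
    """A two-layer ReLU MLP applied independently to each node's features."""
    H = np.maximum(Z @ W1 + b1, 0.0)
    return H @ W2 + b2

# Tiny example: a path graph on 3 nodes with 2-dim features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
Z = ga_mlp_features(A, X, num_hops=2)          # (3, 6)
out = node_wise_mlp(Z, rng.standard_normal((6, 8)), np.zeros(8),
                    rng.standard_normal((8, 1)), np.zeros(1))
print(Z.shape, out.shape)
```

Note that all graph structure enters only through the fixed augmentation step; the trainable part is purely node-wise, which is exactly what makes GA-MLPs a useful baseline for separating the roles of propagation and learning in GNNs.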
no code implementations • NeurIPS 2020 • Zhengdao Chen, Grant M. Rotskoff, Joan Bruna, Eric Vanden-Eijnden
Furthermore, if the mean-field dynamics converges to a measure that interpolates the training data, we prove that the asymptotic deviation eventually vanishes in the CLT scaling.
1 code implementation • NeurIPS 2020 • Zhengdao Chen, Lei Chen, Soledad Villar, Joan Bruna
We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with a finite number of iterations.
1 code implementation • ICLR 2020 • Zhengdao Chen, Jianyu Zhang, Martin Arjovsky, Léon Bottou
We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories.
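The structural prior behind SRNNs is a symplectic integrator unrolled as a recurrent network. The following sketch shows the core leapfrog (Störmer–Verlet) step for a separable Hamiltonian H(q, p) = T(p) + V(q); in an SRNN the gradients of T and V would come from learned neural networks, whereas here a known harmonic oscillator is substituted so the example is self-contained.

```python
def leapfrog_step(q, p, grad_V, grad_T, dt):
    """One leapfrog step for a separable Hamiltonian H(q,p) = T(p) + V(q).

    The update is symplectic, so energy stays nearly conserved over long
    rollouts -- the key inductive bias an SRNN bakes into its recurrence.
    """
    p_half = p - 0.5 * dt * grad_V(q)
    q_next = q + dt * grad_T(p_half)
    p_next = p_half - 0.5 * dt * grad_V(q_next)
    return q_next, p_next

# Harmonic oscillator: H = p^2/2 + q^2/2, so grad_V(q) = q, grad_T(p) = p.
grad_V = lambda q: q
grad_T = lambda p: p

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = leapfrog_step(q, p, grad_V, grad_T, dt=0.01)
energy = 0.5 * (q ** 2 + p ** 2)
print(abs(energy - 0.5) < 1e-3)  # energy drift stays tiny after 1000 steps
```

Replacing `grad_V` and `grad_T` with gradients of trainable networks, and backpropagating through the unrolled steps against observed trajectories, gives the learning setup the abstract describes.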
1 code implementation • NeurIPS 2019 • Zhengdao Chen, Soledad Villar, Lei Chen, Joan Bruna
We further develop a framework of the expressive power of GNNs that incorporates both of these viewpoints using the language of sigma-algebra, through which we compare the expressive power of different types of GNNs together with other graph isomorphism tests.
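The graph isomorphism tests referenced above are color-refinement procedures. As background for the comparison, here is a minimal sketch of the classical 1-WL test (not the paper's k-WL framework): each node's color is repeatedly rehashed together with the multiset of its neighbors' colors, and two graphs with different final color multisets are certainly non-isomorphic.

```python
from collections import Counter

def wl_colors(adj, num_iters=3):
    """1-WL color refinement on a graph given as {node: [neighbors]}."""
    colors = {v: 0 for v in adj}
    for _ in range(num_iters):
        new = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
               for v in adj}
        # Relabel hashes to small integers so colors are stable to compare.
        relabel = {c: i for i, c in enumerate(sorted(set(new.values())))}
        colors = {v: relabel[c] for v, c in new.items()}
    return colors

def wl_signature(adj, num_iters=3):
    """Multiset of final colors; differing signatures imply non-isomorphism."""
    return Counter(wl_colors(adj, num_iters).values())

# Classic failure case: 1-WL cannot distinguish two disjoint triangles
# from a 6-cycle, since both graphs are 2-regular.
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(wl_signature(two_triangles) == wl_signature(six_cycle))  # True
```

Limitations like this one are exactly what motivate comparing GNN variants against the stronger k-WL hierarchy.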
Ranked #27 on Graph Regression on ZINC-500k
2 code implementations • ICLR 2018 • Zhengdao Chen, Xiang Li, Joan Bruna
This graph inference task can be recast as a node-wise graph classification problem, and, as such, computational detection thresholds can be expressed in terms of learning within appropriate models.
4 code implementations • ICLR 2019 • Zhengdao Chen, Xiang Li, Joan Bruna
We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models, which is believed to reach the computational threshold.
Ranked #1 on Community Detection on Amazon (Accuracy-NE metric, using extra training data)