
no code implementations • 28 Oct 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna

To understand the training dynamics of neural networks (NNs), prior studies have considered the infinite-width mean-field (MF) limit of two-layer NNs, establishing theoretical guarantees of convergence under gradient-flow training as well as approximation and generalization capabilities.
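As a pointer to the setting (standard mean-field notation, not taken verbatim from the paper), the MF limit replaces the average over a finite set of units by an integral against a measure over their parameters:

```latex
\[
  f_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} c_i\,\sigma(a_i \cdot x)
  \;\xrightarrow[\,n \to \infty\,]{}\;
  f(x) \;=\; \int c\,\sigma(a \cdot x)\, d\mu(a, c),
\]
```

and gradient-flow training of the parameters becomes, in this limit, a Wasserstein gradient flow on the measure $\mu$.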

no code implementations • 30 Sep 2022 • Michael S. Albergo, Eric Vanden-Eijnden

A simple generative model based on a continuous-time normalizing flow between any pair of base and target probability densities is proposed.
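For intuition, here is a minimal sketch of sampling with such a flow, assuming a velocity field `v(x, t)` is available; the constant toy field below is an illustrative stand-in for a learned one, not the paper's construction:

```python
import numpy as np

def sample_flow(v, x0, n_steps=100):
    """Push base samples x0 through the ODE dx/dt = v(x, t) on t in [0, 1]
    with forward Euler; the terminal points are (approximate) samples from
    the target density."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * v(x, k * dt)
    return x

# Toy velocity field: a constant drift of 4 transports N(0, 1) to N(4, 1)
# over unit time.
v = lambda x, t: 4.0 * np.ones_like(x)

x0 = np.random.randn(10_000)   # base samples
x1 = sample_flow(v, x0)        # approximate target samples
print(x1.mean(), x1.std())     # ~4.0, ~1.0
```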

1 code implementation • 24 Jun 2022 • Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart

It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data.

1 code implementation • 20 Jun 2022 • Yu Cao, Eric Vanden-Eijnden

On the theory side, we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is perfect, i.e., has zero variance.

no code implementations • 9 Jun 2022 • Nicholas M. Boffi, Eric Vanden-Eijnden

The method of choice for integrating the time-dependent Fokker-Planck equation in high dimension is to generate samples from the solution via integration of the associated stochastic differential equation.
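A minimal sketch of that route, with Euler-Maruyama time stepping (the Ornstein-Uhlenbeck drift and noise level are illustrative choices):

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, t_final, n_steps, seed=0):
    """Integrate dX = drift(X, t) dt + sigma dW; the ensemble of end points
    samples the time-dependent Fokker-Planck solution at t_final."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    dt = t_final / n_steps
    for k in range(n_steps):
        x = x + drift(x, k * dt) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Ornstein-Uhlenbeck example: dX = -X dt + sqrt(2) dW, whose law relaxes
# to the N(0, 1) stationary solution of the Fokker-Planck equation.
samples = euler_maruyama(lambda x, t: -x, np.sqrt(2.0), np.zeros(10_000), 5.0, 500)
print(samples.std())  # ~1.0
```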

no code implementations • 22 Apr 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna

We study the optimization of wide neural networks (NNs) via gradient flow (GF) in setups that allow feature learning while admitting non-asymptotic global convergence guarantees.

no code implementations • 2 Mar 2022 • Joan Bruna, Benjamin Peherstorfer, Eric Vanden-Eijnden

Deep neural networks have been shown to provide accurate function approximations in high dimensions.

no code implementations • ICLR 2022 • Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna

We study the optimization of over-parameterized shallow and multi-layer neural networks (NNs) in a regime that allows feature learning while admitting non-asymptotic global convergence guarantees.

no code implementations • ICML Workshop INNF 2021 • Marylou Gabrié, Grant M. Rotskoff, Eric Vanden-Eijnden

Normalizing flows can generate complex target distributions and thus show promise in many applications in Bayesian statistics as an alternative or complement to MCMC for sampling posteriors.
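One generic way a flow can complement MCMC is as an independence proposal inside Metropolis-Hastings; in the sketch below a trivial affine map with tractable density stands in for a trained flow (an illustration of the general idea, not the algorithm of this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

log_target = lambda x: -0.5 * (x - 3.0) ** 2              # unnormalized N(3, 1)
flow_sample = lambda: 2.0 + 1.5 * rng.standard_normal()   # "flow" proposal
flow_logpdf = lambda x: -0.5 * ((x - 2.0) / 1.5) ** 2 - np.log(1.5)  # up to a constant

x, chain = 0.0, []
for _ in range(20_000):
    y = flow_sample()
    # Independence Metropolis-Hastings ratio pi(y) q(x) / (pi(x) q(y));
    # additive constants in the log-densities cancel here.
    log_alpha = log_target(y) - log_target(x) + flow_logpdf(x) - flow_logpdf(y)
    if np.log(rng.random()) < log_alpha:
        x = y
    chain.append(x)
print(np.mean(chain))  # ~3.0
```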

no code implementations • 11 Jul 2021 • Carles Domingo-Enrich, Alberto Bietti, Marylou Gabrié, Joan Bruna, Eric Vanden-Eijnden

In the feature-learning regime, this dual formulation justifies using a two time-scale gradient ascent-descent (GDA) training algorithm in which one concurrently updates the particles in the sample space and the neurons in the parameter space of the energy.
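A minimal sketch of such a two time-scale scheme (the toy energy network, step sizes, and contrastive objective below are illustrative assumptions, not the paper's exact algorithm):

```python
import torch

energy = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, 1))
data = torch.randn(512, 2) + torch.tensor([3.0, 0.0])  # toy data
particles = torch.randn(512, 2)                        # sample-space particles

lr_theta, lr_x = 1e-3, 1e-1                            # slow / fast time scales
opt = torch.optim.SGD(energy.parameters(), lr=lr_theta)

for step in range(2000):
    # Fast variable: Langevin update of the particles on the current energy.
    particles.requires_grad_(True)
    grad_x = torch.autograd.grad(energy(particles).sum(), particles)[0]
    particles = (particles - lr_x * grad_x
                 + (2 * lr_x) ** 0.5 * torch.randn_like(particles)).detach()

    # Slow variable: contrastive update that lowers the energy of the data
    # and raises that of the particles.
    loss = energy(data).mean() - energy(particles).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```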

1 code implementation • 15 Apr 2021 • Carles Domingo-Enrich, Alberto Bietti, Eric Vanden-Eijnden, Joan Bruna

Energy-based models (EBMs) are a simple yet powerful framework for generative modeling.
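Concretely, an EBM posits a density known only up to normalization (standard definition):

```latex
\[
  p_\theta(x) \;=\; \frac{e^{-E_\theta(x)}}{Z_\theta},
  \qquad
  Z_\theta \;=\; \int e^{-E_\theta(x)}\,dx,
\]
```

and the intractability of $Z_\theta$ is what makes training and sampling nontrivial.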

no code implementations • 8 Mar 2021 • Tobias Grafke, Tobias Schäfer, Eric Vanden-Eijnden

The Freidlin-Wentzell theory of large deviations can be used to compute the likelihood of extreme or rare events in stochastic dynamical systems via the solution of an optimization problem.
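In the standard additive-noise setting $dX = b(X)\,dt + \sqrt{\varepsilon}\,dW$, that optimization problem is the minimization of the Freidlin-Wentzell action over paths connecting the initial point to the rare set (the standard form of the theory, not a formula specific to this paper):

```latex
\[
  \mathbb{P}(X_T \in B) \;\asymp\;
  \exp\!\left(-\frac{1}{\varepsilon}
    \min_{\phi(0) = x_0,\; \phi(T) \in B}
    \frac{1}{2}\int_0^T \bigl|\dot\phi(t) - b(\phi(t))\bigr|^2\,dt\right)
  \quad \text{as } \varepsilon \to 0,
\]
```

with the minimizing path (the instanton) giving the most likely realization of the event.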

Statistical Mechanics, Optimization and Control, Probability, Fluid Dynamics

no code implementations • NeurIPS 2020 • Zhengdao Chen, Grant M. Rotskoff, Joan Bruna, Eric Vanden-Eijnden

Furthermore, if the mean-field dynamics converges to a measure that interpolates the training data, we prove that the asymptotic deviation eventually vanishes in the CLT scaling.

1 code implementation • 11 Aug 2020 • Grant M. Rotskoff, Andrew R. Mitchell, Eric Vanden-Eijnden

Deep neural networks, when optimized with sufficient data, provide accurate representations of high-dimensional functions; in contrast, function approximation techniques that have predominated in scientific computing do not scale well with dimensionality.

no code implementations • NeurIPS 2020 • Stefano Sarao Mannelli, Eric Vanden-Eijnden, Lenka Zdeborová

We consider a teacher-student scenario where the teacher has the same structure as the student with a hidden layer of smaller width $m^*\le m$.
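A minimal sketch of that teacher-student setup (widths, data sizes, and optimizer are illustrative choices):

```python
import torch

m_star, m, d = 4, 32, 10  # teacher width m* <= student width m
teacher = torch.nn.Sequential(torch.nn.Linear(d, m_star), torch.nn.ReLU(),
                              torch.nn.Linear(m_star, 1))
student = torch.nn.Sequential(torch.nn.Linear(d, m), torch.nn.ReLU(),
                              torch.nn.Linear(m, 1))

X = torch.randn(4096, d)
with torch.no_grad():
    y = teacher(X)  # labels are generated by the (fixed) teacher

opt = torch.optim.SGD(student.parameters(), lr=1e-2)
for _ in range(1000):
    loss = ((student(X) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```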

no code implementations • 5 Feb 2019 • Grant Rotskoff, Samy Jelassi, Joan Bruna, Eric Vanden-Eijnden

Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models.

no code implementations • NeurIPS 2018 • Grant Rotskoff, Eric Vanden-Eijnden

The performance of neural networks on high-dimensional data distributions suggests that it may be possible to parameterize a representation of a given high-dimensional function with controllably small errors, potentially outperforming standard interpolation methods.

2 code implementations • 28 Sep 2018 • Grant M. Rotskoff, Eric Vanden-Eijnden

Nonequilibrium sampling is potentially much more versatile than its equilibrium counterpart, but it comes with challenges because the invariant distribution is not typically known when the dynamics breaks detailed balance.

Statistical Mechanics

no code implementations • 2 May 2018 • Grant M. Rotskoff, Eric Vanden-Eijnden

We show that, when the number $n$ of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of $n$, with a resulting approximation error that universally scales as $O(n^{-1})$.
