Search Results for author: Grant M. Rotskoff

Found 6 papers, 3 papers with code

Cooperative multi-agent reinforcement learning for high-dimensional nonequilibrium control

1 code implementation · 12 Nov 2021 · Shriram Chennakesavalu, Grant M. Rotskoff

Experimental advances enabling high-resolution external control create new opportunities to produce materials with exotic properties.

Multi-agent Reinforcement Learning

Efficient Bayesian Sampling Using Normalizing Flows to Assist Markov Chain Monte Carlo Methods

no code implementations · ICML Workshop INNF 2021 · Marylou Gabrié, Grant M. Rotskoff, Eric Vanden-Eijnden

Normalizing flows can generate complex target distributions and thus show promise in many applications in Bayesian statistics as an alternative or complement to MCMC for sampling posteriors.
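One common way a flow assists MCMC is as the proposal in an independence Metropolis–Hastings sampler, so that accepted moves are global jumps drawn from the flow rather than small local steps. The sketch below is a toy illustration of that idea, not the paper's method: a simple affine map of a standard normal (with hypothetical parameters `mu`, `sigma`) stands in for a trained flow, and the target is an unnormalized Gaussian posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained normalizing flow: an affine map of a standard
# normal. A real flow is a learned invertible network with a tractable
# log-determinant; mu and sigma here are hypothetical fitted parameters.
mu, sigma = 1.0, 1.5

def flow_sample():
    return mu + sigma * rng.standard_normal()

def flow_logpdf(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def target_logpdf(x):
    # Unnormalized target (posterior) log-density: here N(1, 1).
    return -0.5 * (x - 1.0) ** 2

# Independence Metropolis-Hastings with the flow as the proposal distribution.
x = 0.0
samples = []
for _ in range(20000):
    y = flow_sample()
    log_alpha = (target_logpdf(y) - target_logpdf(x)
                 + flow_logpdf(x) - flow_logpdf(y))
    if np.log(rng.random()) < log_alpha:
        x = y
    samples.append(x)

posterior_mean = np.mean(samples[2000:])  # discard burn-in
```

The closer the flow matches the target, the higher the acceptance rate; in the flow-assisted schemes this abstract alludes to, the flow is retrained on the accumulated samples so the two improve together.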

A Dynamical Central Limit Theorem for Shallow Neural Networks

no code implementations · NeurIPS 2020 · Zhengdao Chen, Grant M. Rotskoff, Joan Bruna, Eric Vanden-Eijnden

If the mean-field dynamics converges to a measure that interpolates the training data, we prove that the asymptotic deviation eventually vanishes in the CLT scaling.

Active Importance Sampling for Variational Objectives Dominated by Rare Events: Consequences for Optimization and Generalization

1 code implementation · 11 Aug 2020 · Grant M. Rotskoff, Andrew R. Mitchell, Eric Vanden-Eijnden

Deep neural networks, when optimized with sufficient data, provide accurate representations of high-dimensional functions; in contrast, function approximation techniques that have predominated in scientific computing do not scale well with dimensionality.

Learning Theory

Dynamical computation of the density of states and Bayes factors using nonequilibrium importance sampling

2 code implementations · 28 Sep 2018 · Grant M. Rotskoff, Eric Vanden-Eijnden

Nonequilibrium sampling is potentially much more versatile than its equilibrium counterpart, but it comes with challenges because the invariant distribution is not typically known when the dynamics breaks detailed balance.
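The quantities this abstract mentions (density of states, Bayes factors) reduce to ratios of normalizing constants. As a point of reference for what the nonequilibrium scheme generalizes, the sketch below shows the plain equilibrium importance-sampling identity Z1/Z0 = E_rho0[rho1/rho0] on two toy unnormalized Gaussians with a known exact ratio; the paper's method instead accumulates weights along driven trajectories, which works even when the invariant distribution is unknown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unnormalized densities with known normalizing constants
# Z = sqrt(2*pi*var), so the exact ratio is Z1/Z0 = sqrt(var1/var0).
def log_rho0(x):
    # N(0, 2), unnormalized; we sample from this (broader) density.
    return -0.25 * x ** 2

def log_rho1(x):
    # N(0, 1), unnormalized.
    return -0.5 * x ** 2

# Equilibrium importance sampling: draw from the normalized rho0 and
# average the static weight rho1/rho0. (Sampling the broader density keeps
# the weights bounded and the estimator's variance finite.)
x = np.sqrt(2.0) * rng.standard_normal(200_000)  # exact samples from rho0
log_w = log_rho1(x) - log_rho0(x)
ratio_estimate = np.mean(np.exp(log_w))

exact_ratio = np.sqrt(0.5)  # Z1/Z0 = sqrt(1/2)
```

Nonequilibrium importance sampling replaces these one-shot weights with work-like weights computed along a driven trajectory between the two densities, removing the need for the static proposal to overlap the target.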

Statistical Mechanics

Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach

no code implementations · 2 May 2018 · Grant M. Rotskoff, Eric Vanden-Eijnden

We show that, when the number $n$ of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of $n$, with a resulting approximation error that universally scales as $O(n^{-1})$.
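For orientation, the standard mean-field parameterization that such interacting-particle analyses concern (the notation below is the conventional one, not quoted from the paper) is:

```latex
% Shallow network with n units, scaled in mean-field fashion, so the
% parameters \theta_i behave as interacting particles:
\[
  f_n(x) = \frac{1}{n} \sum_{i=1}^{n} \varphi(x; \theta_i),
\]
% with the empirical measure of the \theta_i descending the loss landscape
% and an approximation error scaling as
\[
  \bigl\| f_n - f^{*} \bigr\|^{2} = O(n^{-1}).
\]
```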
