1 code implementation • 2 Feb 2023 • François Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang
We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter.
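For illustration only: a minimal numerical sketch of such a node-scaled shallow network, with fixed positive scalings lam[j] and small-step gradient descent standing in for gradient flow. All names and parameter choices below are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: a shallow network whose hidden-node outputs are scaled
# by fixed positive parameters lam[j], trained by small-step gradient descent
# as a discrete-time surrogate for gradient flow. Illustrative choices only.

rng = np.random.default_rng(0)
n, d, m = 32, 5, 512                     # samples, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                      # toy regression target

lam = 1.0 / np.arange(1, m + 1)          # positive per-node scalings (illustrative)
W = rng.normal(size=(m, d))              # input-to-hidden weights
a = rng.normal(size=m)                   # hidden-to-output weights

def forward(X, W, a):
    h = np.tanh(X @ W.T)                 # hidden activations, shape (n, m)
    return h @ (lam * a)                 # each node's output scaled by lam[j]

lr = 1e-2
for _ in range(2000):                    # small steps approximate gradient flow
    h = np.tanh(X @ W.T)
    r = h @ (lam * a) - y                # residuals of the squared loss
    grad_a = (h.T @ r) * lam / n
    grad_W = ((r[:, None] * (lam * a)) * (1 - h**2)).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

print(float(np.mean((forward(X, W, a) - y) ** 2)))
```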
1 code implementation • 17 May 2022 • Hoil Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals.
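For context, a standard way to encode such a pair (a non-negative scalar and a Lévy measure on the positive reals) is the Laplace-exponent representation of a non-negative infinitely divisible random variable. The sketch below states that generic form; it is not necessarily the exact parametrisation used in the paper.

```latex
% Generic Laplace-exponent form for a non-negative infinitely divisible
% random variable S, determined by a drift b >= 0 and a Levy measure nu
% on (0, infinity). The paper's layer-wise characterisation is of this
% flavour; conventions here are the textbook ones, not the paper's.
\[
  \mathbb{E}\!\left[e^{-t S}\right]
    = \exp\!\left(-\,b\,t
      - \int_{0}^{\infty} \bigl(1 - e^{-t x}\bigr)\, \nu(\mathrm{d}x)\right),
  \qquad t \ge 0,
\]
\[
  b \ge 0, \qquad \int_{0}^{\infty} \min(x, 1)\, \nu(\mathrm{d}x) < \infty .
\]
```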
no code implementations • 18 Jun 2021 • Paul Jung, Hoil Lee, Jiho Lee, Hongseok Yang
We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks.
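Illustration only: a minimal finite-width feed-forward MLP of the kind whose hidden widths are taken to infinity. The Gaussian weights and 1/sqrt(width) scaling below are the textbook choices and are not claimed to match the paper's weight assumptions.

```python
import numpy as np

# Purely illustrative finite-width feed-forward MLP. The i.i.d. Gaussian
# weights and 1/sqrt(fan-in) scaling are standard Gaussian-limit choices,
# not necessarily the assumptions studied in the paper; the infinite-width
# object arises by letting the hidden widths tend to infinity.

def mlp(x, widths, rng):
    """Forward pass of a feed-forward MLP with tanh activations."""
    h = x
    for n_out in widths:
        n_in = h.shape[-1]
        W = rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)
        b = rng.normal(size=n_out)
        h = np.tanh(h @ W + b)
    return h

x = np.random.default_rng(0).normal(size=(1, 10))
for width in (64, 256, 1024):            # widening towards the limit
    out = mlp(x, widths=[width, width, 1], rng=np.random.default_rng(1))
    print(width, float(out[0, 0]))
```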
no code implementations • 8 Dec 2020 • Djalil Chafaï, David García-Zelada, Paul Jung
We consider a one-dimensional classical Wigner jellium, not necessarily charge neutral, for which the electrons are allowed to exist beyond the support of the background charge.
Point Processes • Probability • Mathematical Physics • MSC: Primary 82B05, 60K35, 60G55; Secondary 82D05, 62G30, 60G70
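For context, a schematic form of the one-dimensional jellium energy and its Gibbs measure, written with the one-dimensional Coulomb kernel g(x) = -|x| for n electrons in the real line and a positive background measure. Constants, scalings, and conventions below are illustrative rather than those of the paper.

```latex
% Schematic one-dimensional jellium: n unit-charge electrons at positions
% x_1, ..., x_n in R, positive background measure mu, Coulomb kernel
% g(x) = -|x|. Normalisations are illustrative, not the paper's conventions;
% the electrons are free to sit outside the support of mu.
\[
  H_n(x_1,\dots,x_n)
    \;=\; -\sum_{1 \le i < j \le n} |x_i - x_j|
          \;+\; \sum_{i=1}^{n} \int_{\mathbb{R}} |x_i - y|\, \mu(\mathrm{d}y),
\]
\[
  \mathrm{d}\mathbb{P}_n(x_1,\dots,x_n)
    \;\propto\; \exp\bigl(-\beta\, H_n(x_1,\dots,x_n)\bigr)\,
      \mathrm{d}x_1 \cdots \mathrm{d}x_n ,
  \qquad \beta > 0 .
\]
```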