1 code implementation • 2 Feb 2023 • François Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang
We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter.
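The setting above can be illustrated with a minimal sketch: a shallow network whose hidden-unit outputs are each multiplied by a positive scaling parameter, trained by gradient descent (an Euler discretisation of gradient flow). All names here (`lam`, `lr`, the choice of `tanh` and of the scaling values) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch (assumed setup, not the paper's): f(x) = sum_i lam_i * a_i * phi(w_i * x),
# where lam_i > 0 scales the output of hidden node i.
rng = np.random.default_rng(0)
m = 50                          # number of hidden units
w = rng.normal(size=m)          # input weights
a = rng.normal(size=m)          # output weights
lam = np.full(m, 1.0 / np.sqrt(m))  # positive per-node scalings (one possible choice)

def forward(x):
    return np.sum(lam * a * np.tanh(w * x))

def step(x, y, lr=0.1):
    """One gradient-descent step on the squared loss 0.5 * (f(x) - y)^2."""
    global w, a
    h = np.tanh(w * x)
    err = forward(x) - y
    grad_a = err * lam * h                    # d loss / d a_i
    grad_w = err * lam * a * (1 - h**2) * x   # d loss / d w_i
    a -= lr * grad_a
    w -= lr * grad_w

# Fit a single toy data point and check the loss decreases.
x, y = 0.5, 1.0
loss0 = (forward(x) - y) ** 2
for _ in range(200):
    step(x, y)
loss1 = (forward(x) - y) ** 2
```

Only the per-node scalings `lam` distinguish this from a plain shallow network; the papers listed here study how the choice of these scalings shapes the training dynamics as the width grows.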
1 code implementation • 17 May 2022 • Hoil Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals.
no code implementations • 18 Jun 2021 • Paul Jung, Hoil Lee, Jiho Lee, Hongseok Yang
We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks.