1 code implementation • 16 May 2022 • Stephen Whitelam, Viktor Selin, Ian Benlolo, Corneel Casert, Isaac Tamblyn
We examine the zero-temperature Metropolis Monte Carlo algorithm as a tool for training a neural network by minimizing a loss function.
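The idea can be sketched in a few lines: propose a Gaussian mutation of all weights, and accept it only if the loss does not increase, which is the zero-temperature limit of the Metropolis acceptance rule. The network, task, and hyperparameters below (a one-hidden-layer `tanh` net fitting `sin(x)`, mutation scale `sigma`) are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with a small one-hidden-layer network.
X = np.linspace(-np.pi, np.pi, 64)[:, None]
y = np.sin(X)

def init_params(hidden=16):
    return {
        "W1": rng.normal(0.0, 1.0, (1, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 1.0, (hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def loss(p):
    return np.mean((forward(p, X) - y) ** 2)

def zero_T_metropolis(p, steps=5000, sigma=0.02):
    """Propose Gaussian mutations of every weight; accept only if the loss
    does not increase (zero-temperature Metropolis: uphill moves rejected)."""
    current = loss(p)
    for _ in range(steps):
        trial = {k: v + sigma * rng.normal(size=v.shape) for k, v in p.items()}
        trial_loss = loss(trial)
        if trial_loss <= current:  # at T = 0 the acceptance probability is 0 or 1
            p, current = trial, trial_loss
    return p, current

p, final = zero_T_metropolis(init_params())
print(final)
```

No gradients are computed anywhere; the loss is treated as a black box, which is what makes the algorithm applicable to non-differentiable losses.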
no code implementations • 15 Aug 2020 • Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn
We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise.
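The equivalence can be checked numerically in the simplest possible setting. For a single weight with loss L(w) = w²/2 (so L'(w) = w) and small Gaussian mutations of scale σ, accepting only loss-decreasing mutations gives a mean step of -sign(L'(w)) σ √(2/π), i.e. a deterministic move down the gradient, with the residual scatter playing the role of Gaussian white noise. This 1D quadratic is an illustrative check, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Loss L(w) = w**2 / 2, gradient L'(w) = w; small mutation scale sigma.
w, sigma = 1.0, 1e-3

# Propose many Gaussian mutations and keep only those that lower the loss
# (conditioned stochastic mutation).
eps = sigma * rng.normal(size=1_000_000)
accepted = eps[(w + eps) ** 2 / 2 < w ** 2 / 2]

# To first order in sigma, the mean accepted step is a half-Gaussian mean
# pointing down the gradient: -sign(L'(w)) * sigma * sqrt(2/pi).
mean_step = accepted.mean()
predicted = -np.sign(w) * sigma * np.sqrt(2.0 / np.pi)
print(mean_step, predicted)
```

The small remaining discrepancy is Monte Carlo sampling error plus O(σ/w) corrections, consistent with the equivalence holding in the limit of small mutations.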