no code implementations • 23 Jun 2023 • Jamie F. Mair, Luke Causer, Juan P. Garrahan
Most iterative neural network training methods use estimates of the loss function over small random subsets (or minibatches) of the data to update the parameters, which helps decouple the training time from the (often very large) size of the training dataset.
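As a rough illustration of the minibatch idea, here is a minimal numpy sketch: the gradient is estimated on a small random subset per step, so the per-update cost is independent of the dataset size. The toy linear model, data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minibatch SGD sketch: fit weights w of a linear model y = X @ w by
# estimating the loss gradient on a small random subset per update.
# All sizes and the quadratic toy loss below are illustrative only.

rng = np.random.default_rng(0)
n_samples, n_features, batch_size = 10_000, 5, 32

X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

w = np.zeros(n_features)
lr = 0.05

for step in range(2_000):
    # Sample a minibatch: the cost of each update is independent of
    # the full dataset size.
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)  # d/dw of mean squared error
    w -= lr * grad

print("max abs error in recovered weights:", np.abs(w - w_true).max())
```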
no code implementations • 22 Sep 2022 • Jamie F. Mair, Dominic C. Rose, Juan P. Garrahan
In machine learning, there is renewed interest in neural network ensembles (NNEs), whereby predictions are obtained as an aggregate from a diverse set of smaller models, rather than from a single larger model.
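As a quick sketch of the ensemble-aggregation idea (not the paper's training method): several small models are fit independently and their predictions averaged. For brevity the "small models" here are linear least-squares fits on bootstrap resamples; the data and ensemble size are illustrative assumptions.

```python
import numpy as np

# Ensemble sketch: aggregate predictions from a diverse set of small
# models rather than relying on a single larger model. Diversity here
# comes from fitting each member on a different bootstrap resample.

rng = np.random.default_rng(1)
n_samples, n_features, n_members = 500, 8, 10

X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.5 * rng.normal(size=n_samples)

members = []
for _ in range(n_members):
    # Each member sees a different resample of the training data.
    idx = rng.choice(n_samples, size=n_samples, replace=True)
    w_member, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    members.append(w_member)

X_test = rng.normal(size=(100, n_features))
# The ensemble prediction is the mean over the individual members.
pred = np.mean([X_test @ w for w in members], axis=0)
print("ensemble prediction shape:", pred.shape)
```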
1 code implementation • 26 May 2020 • Dominic C. Rose, Jamie F. Mair, Juan P. Garrahan
By minimising the distance between a reweighted trajectory ensemble and that of a suitably parametrised controlled dynamics, we arrive at a set of methods, similar to those of reinforcement learning (RL), for numerically approximating the optimal dynamics that realises the rare behaviour of interest.
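The following is a minimal sketch of that idea under strong simplifying assumptions, not the paper's actual algorithm: trajectories of a toy Bernoulli dynamics are reweighted by exp(s · observable), and a one-parameter controlled dynamics is fit by gradient descent on the sampled KL distance to the reweighted ensemble. For i.i.d. Bernoulli steps the optimal tilted dynamics is known in closed form, which gives a check.

```python
import numpy as np

# Toy version: original dynamics flips a coin with probability p0 per
# step; the observable is the number of "active" steps K; trajectories
# are reweighted by exp(s*K). We then fit a controlled per-step
# probability sigmoid(theta) by minimising the weighted negative
# log-likelihood (the sampled KL distance up to a constant).

rng = np.random.default_rng(2)
p0, s, n_steps, n_traj = 0.5, 1.0, 20, 5_000

traj = rng.random((n_traj, n_steps)) < p0
K = traj.sum(axis=1)
w = np.exp(s * K)
w /= w.sum()  # normalised trajectory weights

theta, lr = 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-theta))
    # Gradient of -sum_i w_i * log Q_theta(traj_i) with respect to theta.
    grad = -np.sum(w * (K - n_steps * p))
    theta -= lr * grad / n_steps
p_learned = 1.0 / (1.0 + np.exp(-theta))

# Closed-form optimal tilted dynamics for i.i.d. Bernoulli steps:
p_exact = p0 * np.exp(s) / (p0 * np.exp(s) + 1 - p0)
print(f"learned p = {p_learned:.3f}, exact tilted p = {p_exact:.3f}")
```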