Search Results for author: Adam Prügel-Bennett

Found 4 papers, 0 papers with code

Orthogonalising gradients to speedup neural network optimisation

no code implementations · 29 Sep 2021 · Mark Tuddenham, Adam Prügel-Bennett, Jonathon Hare

The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations.
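A standard way to orthogonalise a gradient matrix (the exact procedure used in the paper may differ) is to replace it with its nearest semi-orthogonal matrix, i.e. the polar factor UVᵀ from its SVD. A minimal NumPy sketch, where `orthogonalise` is an illustrative helper name, not an API from the paper:

```python
import numpy as np

def orthogonalise(grad: np.ndarray) -> np.ndarray:
    # For G = U S V^T, the polar factor U V^T is the closest matrix to G
    # (in Frobenius norm) with orthonormal rows or columns; it keeps the
    # gradient's directions but equalises their scales.
    u, _, vt = np.linalg.svd(grad, full_matrices=False)
    return u @ vt

# A wide gradient matrix maps to one with orthonormal rows:
g = np.random.randn(4, 8)
q = orthogonalise(g)
assert np.allclose(q @ q.T, np.eye(4))
```

In an optimiser, this step would be applied per layer to the gradient (or momentum) matrix before the parameter update.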

Generalisation and the Geometry of Class Separability

no code implementations · NeurIPS Workshop DL-IG 2020 · Dominic Belcher, Adam Prügel-Bennett, Srinandan Dasmahapatra

Recent results in deep learning show that considering only the capacity of machines does not adequately explain the generalisation performance we can observe.

Linear Disentangled Representations and Unsupervised Action Estimation

no code implementations · NeurIPS 2020 · Matthew Painter, Jonathon Hare, Adam Prügel-Bennett

In this work we empirically show that linear disentangled representations are not generally present in standard VAE models and that they instead require altering the loss landscape to induce them.

Disentanglement

Extended Formulations for Online Linear Bandit Optimization

no code implementations · 20 Nov 2013 · Shaona Ghosh, Adam Prügel-Bennett

Online linear optimisation over combinatorial action sets (d-dimensional actions) with bandit feedback is known to have complexity on the order of the dimension of the problem.

Efficient Exploration
