no code implementations • 29 Sep 2021 • Mark Tuddenham, Adam Prugel-Bennett, Jonathon Hare
The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring diversification of the learned representations.
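A minimal sketch of the idea, assuming per-layer SVD-based orthogonalisation (this is illustrative, not the authors' released code; `orthogonalise_gradients` is a hypothetical helper name):

```python
# Sketch: replace each layer's gradient matrix G = U diag(S) V^T with its
# orthogonal polar factor U V^T before a plain optimiser step, so every
# singular direction of the gradient contributes equally.
import torch

def orthogonalise_gradients(model):
    for p in model.parameters():
        if p.grad is None or p.grad.ndim < 2:
            continue  # leave biases / 1-D parameters untouched
        g = p.grad.reshape(p.grad.shape[0], -1)
        u, _, vt = torch.linalg.svd(g, full_matrices=False)
        p.grad = (u @ vt).reshape_as(p.grad)

# usage inside a training loop:
#   loss.backward()
#   orthogonalise_gradients(model)
#   optimiser.step()
```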
no code implementations • NeurIPS Workshop DL-IG 2020 • Dominic Belcher, Adam Prugel-Bennett, Srinandan Dasmahapatra
Recent results in deep learning show that considering only the capacity of learning machines does not adequately explain the generalisation performance observed in practice.
no code implementations • NeurIPS 2020 • Matthew Painter, Jonathon Hare, Adam Prugel-Bennett
In this work we empirically show that linear disentangled representations are not generally present in standard VAE models, and that inducing them requires altering the loss landscape.
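For context, a linear disentangled representation is one in which each symmetry of the data acts linearly on its own latent subspace; a toy sketch of such an action, with illustrative names and a block-diagonal rotation structure assumed for concreteness:

```python
# Sketch: a latent space split into 2-D subspaces, where each generative
# factor acts on its own subspace by a rotation, i.e. the group action is
# linear and block-diagonal on the latent code.
import numpy as np

def rotation_block(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def act(z, thetas):
    """Apply one rotation per 2-D latent subspace of z."""
    blocks = [rotation_block(t) for t in thetas]
    return np.concatenate([b @ z[2 * i:2 * i + 2] for i, b in enumerate(blocks)])

z = np.array([1.0, 0.0, 0.0, 1.0])        # two 2-D latent subspaces
z_new = act(z, [np.pi / 2, 0.0])          # move factor 1 only; factor 2 is fixed
```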
no code implementations • 20 Nov 2013 • Shaona Ghosh, Adam Prugel-Bennett
Online linear optimisation over combinatorial action sets (d-dimensional actions) with bandit feedback is known to have complexity on the order of the dimension of the problem.
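A hedged sketch of one standard approach to this setting, an EXP2-style exponential-weights learner with a least-squares loss estimator over an enumerated combinatorial set (illustrative only, not the paper's algorithm; explicit exploration mixing is omitted for brevity, and all names are assumptions):

```python
# Sketch: exponential weights over all size-k subsets of d items, with
# bandit feedback (only the scalar loss of the played action is observed).
# The unbiased loss estimate uses Q^{-1} a <a, loss>, where Q = E_p[a a^T].
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, k, eta, T = 5, 2, 0.1, 1000
actions = np.array([a for a in itertools.product([0, 1], repeat=d)
                    if sum(a) == k], dtype=float)   # all k-subsets as 0/1 vectors
n = len(actions)
log_w = np.zeros(n)                                  # log-weights over actions

true_loss = rng.uniform(0, 1, size=d) / k            # unknown linear loss vector

for t in range(T):
    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    i = rng.choice(n, p=p)
    a = actions[i]
    observed = a @ true_loss                         # bandit feedback: scalar only
    Q = (actions * p[:, None]).T @ actions           # E_p[a a^T]
    ell_hat = np.linalg.pinv(Q) @ a * observed       # unbiased estimate of the loss
    log_w -= eta * (actions @ ell_hat)               # exponential-weights update
```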