1 code implementation • NeurIPS 2019 • Sindy Löwe, Peter O'Connor, Bastiaan S. Veeling
We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation, instead exploiting the natural order in data.
Ranked #60 on Image Classification on STL-10
Representation Learning • Self-Supervised Audio Classification • +2
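To make the gradient-isolated, module-local training idea concrete, here is a minimal sketch: a stack of small encoder modules, each optimized with its own InfoNCE-style contrastive loss on temporally ordered inputs, with gradients blocked between modules. The module sizes, loss, and stand-in data are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stack of gradient-isolated modules: each module trains with
# its own local contrastive loss, and .detach() prevents gradients from
# flowing between modules (no end-to-end backpropagation).
modules = nn.ModuleList(
    [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(3)]
)
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

def infonce_loss(z_t, z_next):
    # Score each "present" code against all "future" codes in the batch;
    # the temporally matching pair is the positive (the natural order in data).
    logits = z_t @ z_next.t()               # (B, B) similarity matrix
    targets = torch.arange(z_t.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Stand-in pairs of temporally adjacent inputs (batch of 32, dim 64).
x_t, x_next = torch.randn(32, 64), torch.randn(32, 64)
for m, opt in zip(modules, optims):
    z_t, z_next = m(x_t), m(x_next)
    loss = infonce_loss(z_t, z_next)
    opt.zero_grad(); loss.backward(); opt.step()
    # Detach before feeding the next module: each gradient stays local.
    x_t, x_next = z_t.detach(), z_next.detach()
```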
no code implementations • ICLR 2019 • Peter O'Connor, Efstratios Gavves, Max Welling
In response to this, Scellier & Bengio (2017) proposed Equilibrium Propagation, a method for gradient-based training of neural networks which uses only local learning rules and, crucially, does not rely on neurons having a mechanism for back-propagating an error gradient.
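As a rough illustration of the two-phase, local-update scheme, here is a toy sketch of Equilibrium Propagation on a tiny Hopfield-style network: settle to a fixed point twice (a free phase, then a weakly output-clamped phase) and update each weight from purely local pre/post activity products. The dynamics, activation, and hyperparameters are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                        # total units (3 inputs, 2 outputs)
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2; np.fill_diagonal(W, 0)    # symmetric, no self-connections

rho = lambda s: np.clip(s, 0, 1)             # hard-sigmoid activation

def settle(s, x, y=None, beta=0.0, steps=50, dt=0.1):
    for _ in range(steps):
        ds = -s + W @ rho(s)                 # relax toward lower energy
        if y is not None:
            ds[-2:] += beta * (y - s[-2:])   # weakly nudge outputs to target
        s = s + dt * ds
        s[:3] = x                            # inputs stay clamped
    return s

x, y = rng.random(3), rng.random(2)
s0 = rng.random(n)
s_free = settle(s0.copy(), x)                    # free phase
s_nudge = settle(s_free.copy(), x, y, beta=0.5)  # weakly clamped phase

beta, lr = 0.5, 0.05
r_free, r_nudge = rho(s_free), rho(s_nudge)
# Local contrastive Hebbian update: difference of outer products, no
# backpropagated error signal anywhere.
W += lr / beta * (np.outer(r_nudge, r_nudge) - np.outer(r_free, r_free))
np.fill_diagonal(W, 0)
```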
1 code implementation • 5 Jul 2018 • Kaixin Hu, Peter O'Connor
For the navigation problem, we map the starting image and the destination image to the latent space, optimize a path on the learned manifold connecting the two points, and finally map the path back through the decoder to a sequence of images.
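A minimal sketch of this encode, plan, decode loop, assuming a trained autoencoder: the `encode`/`decode` placeholders and the smoothness-only path objective below are hypothetical simplifications (a real planner would also penalize leaving the learned manifold, e.g. via reconstruction error).

```python
import numpy as np

def encode(img):  # placeholder: a trained encoder would go here
    return img.mean(axis=(0, 1))

def decode(z):    # placeholder: a trained decoder would go here
    return np.tile(z, (8, 8, 1))

def plan_path(z_start, z_goal, n_waypoints=10, iters=200, lr=0.1):
    # Initialize with straight-line interpolation between the two codes.
    ts = np.linspace(0, 1, n_waypoints)[:, None]
    path = (1 - ts) * z_start + ts * z_goal
    for _ in range(iters):
        # Smoothness objective: pull each interior waypoint toward the
        # midpoint of its neighbors; the endpoints stay fixed. A manifold
        # penalty would be added here in a full implementation.
        mid = 0.5 * (path[:-2] + path[2:])
        path[1:-1] += lr * (mid - path[1:-1])
    return path

start_img, goal_img = np.random.rand(8, 8, 3), np.random.rand(8, 8, 3)
path = plan_path(encode(start_img), encode(goal_img))
frames = [decode(z) for z in path]   # sequence of images along the path
```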
no code implementations • ICLR 2018 • Riaan Zoetmulder, Efstratios Gavves, Peter O'Connor
Neural networks make mistakes.
1 code implementation • ICLR 2018 • Peter O'Connor, Efstratios Gavves, Max Welling
We present a variant on backpropagation for neural networks in which computation scales with the rate of change of the data, not the rate at which we process the data.
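One way to see how computation can scale with change rather than frame rate: layers transmit only the differences in their activations, and the receiver maintains a running sum, so per-step work is proportional to how many inputs actually changed. The thresholding scheme and toy stream below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))
x_prev = np.zeros(6)
u = np.zeros(4)          # receiver's running reconstruction of W @ x

for t in range(100):
    x = np.sin(t / 20.0) * np.ones(6)     # slowly varying input stream
    dx = x - x_prev                        # delta-encode the input
    active = np.abs(dx) > 1e-3             # only inputs that actually changed
    u += W[:, active] @ dx[active]         # cost ~ number of changed inputs
    x_prev = x
# u now tracks W @ x using work proportional to the change in the data.
```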
1 code implementation • 7 Nov 2016 • Peter O'Connor, Max Welling
Thus, the amount of computation the network performs scales with the amount of change in the input and layer activations, rather than with the size of the network.
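In the spirit of sigma-delta quantization, a toy sketch of the accumulate-and-quantize mechanism: integrate incoming values into a residual, emit integer "spikes", and carry the rounding error forward. Transmitting deltas this way makes the spike count track how much the signal changes; the scale `K` and the test signal are made-up illustrations.

```python
import numpy as np

K = 16.0                                    # quantization scale (spikes per unit)

def sigma_delta_encode(xs, scale=K):
    phi, spikes = 0.0, []
    for x in xs:
        phi += scale * x                    # sigma: integrate the value
        s = int(np.round(phi))              # delta: emit an integer spike count
        phi -= s                            # keep only the rounding residual
        spikes.append(s)
    return spikes

signal = 0.3 * np.sin(np.linspace(0, 6, 50)) + 0.5
deltas = np.diff(signal, prepend=0.0)       # transmit changes, not raw values
spikes = sigma_delta_encode(deltas)
recon = np.cumsum(spikes) / K               # running sum reconstructs the signal
assert np.max(np.abs(recon - signal)) <= 1.0 / K  # error bounded by step size
```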
1 code implementation • 26 Feb 2016 • Peter O'Connor, Max Welling
Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold, at which point the neuron is reset.
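A direct sketch of that neuron model: activation accumulates into a potential, and the unit emits a spike and resets whenever the potential crosses a threshold. The subtractive reset and the parameters here are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def integrate_and_fire(inputs, threshold=1.0):
    potential, spikes = 0.0, []
    for x in inputs:
        potential += x                   # accumulate activation over time
        if potential >= threshold:
            spikes.append(1)             # send out a spike ...
            potential -= threshold       # ... and reset the potential
        else:
            spikes.append(0)
    return spikes

drive = np.full(20, 0.3)                 # constant input current
print(integrate_and_fire(drive))         # fires roughly every 3-4 steps
```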