1 code implementation • 20 Jan 2019 • Arild Nøkland, Lars Hiller Eidnes
We use single-layer sub-networks and two different supervised loss functions to generate local error signals for the hidden layers, and we show that combining these losses helps optimization in the context of local learning.
Ranked #3 on Image Classification on Kuzushiji-MNIST
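A minimal PyTorch-style sketch of the idea, not the authors' code: each hidden layer gets a single-layer auxiliary classifier and is trained by a weighted sum of a local cross-entropy prediction loss and a local similarity-matching loss. The dimensions, the weighting `beta`, and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sim_loss(h, y, num_classes):
    # match pairwise cosine similarities of the hidden features
    # to those of the one-hot label vectors
    hf = F.normalize(h.flatten(1), dim=1)
    yf = F.normalize(F.one_hot(y, num_classes).float(), dim=1)
    return ((hf @ hf.t()) - (yf @ yf.t())).pow(2).mean()

class LocalLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_classes, beta=0.99):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.aux = nn.Linear(out_dim, num_classes)  # single-layer sub-network
        self.num_classes, self.beta = num_classes, beta

    def forward(self, x, y):
        h = self.layer(x)
        pred = F.cross_entropy(self.aux(h), y)      # local prediction loss
        sim = sim_loss(h, y, self.num_classes)      # local similarity loss
        loss = (1 - self.beta) * pred + self.beta * sim
        return h.detach(), loss  # detach: no global gradient crosses layers

layers = nn.ModuleList([LocalLayer(784, 256, 10), LocalLayer(256, 256, 10)])
opt = torch.optim.Adam(layers.parameters(), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
h, total = x, 0.0
for l in layers:
    h, loss = l(h, y)
    total = total + loss
opt.zero_grad(); total.backward(); opt.step()
```

Because each hidden activation is detached before it is passed on, every layer learns only from its own error signal, which is the local-learning setting the abstract describes.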
1 code implementation • ICLR 2018 • Lars Eidnes, Arild Nøkland
We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning in this setting.
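A minimal sketch of a bipolar activation, assuming PyTorch: the even-indexed units apply f(x) and the odd-indexed units apply -f(-x), which shifts the mean activation towards zero. The function name is illustrative.

```python
import torch

def bipolar_relu(x):
    # bipolar version of ReLU: even-indexed units get f(x),
    # odd-indexed units get -f(-x)
    out = torch.empty_like(x)
    out[..., 0::2] = torch.relu(x[..., 0::2])
    out[..., 1::2] = -torch.relu(-x[..., 1::2])
    return out

h = bipolar_relu(torch.randn(4, 8))
print(h.mean())  # much closer to zero than with plain ReLU
```

Keeping the mean activation near zero is what makes very deep vanilla RNN stacks trainable in the paper's setting.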
5 code implementations • NeurIPS 2016 • Arild Nøkland
In this work, the feedback alignment principle is used to train hidden layers more independently of the rest of the network, and from a zero initial condition.
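A minimal sketch of direct feedback alignment on a two-hidden-layer MLP, assuming tanh units and a softmax output: the output error is sent to each hidden layer through a fixed random matrix B instead of the transposed forward weights. The shapes and the small random initialization are illustrative (the paper shows learning even from zero-initialized forward weights).

```python
import torch

torch.manual_seed(0)
n_in, n_h, n_out, lr = 784, 256, 10, 0.1
W1 = torch.randn(n_in, n_h) * 0.01
W2 = torch.randn(n_h, n_h) * 0.01
W3 = torch.randn(n_h, n_out) * 0.01
B1 = torch.randn(n_out, n_h)   # fixed random feedback matrices,
B2 = torch.randn(n_out, n_h)   # never trained

x = torch.randn(32, n_in)
y = torch.nn.functional.one_hot(torch.randint(0, n_out, (32,)), n_out).float()

h1 = torch.tanh(x @ W1)
h2 = torch.tanh(h1 @ W2)
out = torch.softmax(h2 @ W3, dim=1)
e = out - y                        # output error (softmax + cross-entropy)
d2 = (e @ B2) * (1 - h2 ** 2)      # error projected directly to layer 2
d1 = (e @ B1) * (1 - h1 ** 2)      # error projected directly to layer 1
W3 -= lr * h2.t() @ e / 32
W2 -= lr * h1.t() @ d2 / 32
W1 -= lr * x.t() @ d1 / 32
```

Each hidden layer's update depends only on its own activity and the projected output error, not on the weights of the layers above it.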
1 code implementation • 14 Oct 2015 • Arild Nøkland
First experimental results on MNIST show that the "adversarial back-propagation" method increases resistance to adversarial examples and boosts classification performance.
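A minimal sketch in the spirit of adversarial back-propagation, assuming PyTorch: the input is perturbed along the sign of the loss gradient (fast gradient sign method) and the network is trained on the perturbed input. The model, epsilon, and data here are illustrative placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 0.1

x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
grad, = torch.autograd.grad(loss, x)
x_adv = (x + eps * grad.sign()).detach()  # perturb input along the loss gradient

opt.zero_grad()
nn.functional.cross_entropy(model(x_adv), y).backward()
opt.step()
```

Training on these gradient-sign perturbations is what gives the reported robustness to adversarial examples.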