Search Results for author: Arild Nøkland

Found 4 papers, 4 papers with code

Training Neural Networks with Local Error Signals

1 code implementation • 20 Jan 2019 • Arild Nøkland, Lars Hiller Eidnes

We use single-layer sub-networks and two different supervised loss functions to generate local error signals for the hidden layers, and we show that the combination of these losses helps with optimization in the context of local learning.

Image Classification
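The local-learning idea above can be sketched in a few lines: each hidden layer gets its own auxiliary classifier, and the layer's weights are updated only from that local loss, never from gradients arriving from later layers. The paper combines two supervised losses (a similarity-matching loss and a prediction cross-entropy loss); this toy sketch, with all names, sizes, and the single-layer setup chosen purely for illustration, uses only the prediction cross-entropy part.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_layer_step(x, y_onehot, W, V, lr=0.5):
    """One update for a single hidden layer driven only by a local error
    signal: an auxiliary linear classifier V on top of the layer supplies
    a cross-entropy loss, so no gradient arrives from later layers."""
    h = np.maximum(0.0, x @ W)                      # ReLU hidden activations
    logits = h @ V                                  # local linear classifier
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)               # softmax probabilities
    d_logits = (p - y_onehot) / len(x)              # cross-entropy gradient
    dV = h.T @ d_logits                             # auxiliary classifier grad
    dh = (d_logits @ V.T) * (h > 0)                 # back through ReLU only
    dW = x.T @ dh                                   # purely local weight update
    return W - lr * dW, V - lr * dV

# Toy two-class problem: the label is the sign of the first input feature.
x = rng.normal(size=(64, 10))
y = (x[:, 0] > 0).astype(int)
y_onehot = np.eye(2)[y]
W = rng.normal(scale=0.1, size=(10, 16))
V = rng.normal(scale=0.1, size=(16, 2))
for _ in range(1000):
    W, V = local_layer_step(x, y_onehot, W, V)
```

In a deeper network, each layer would run its own copy of this step, so no backward pass through the full stack is needed.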

Shifting Mean Activation Towards Zero with Bipolar Activation Functions

1 code implementation • ICLR 2018 • Lars Eidnes, Arild Nøkland

We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning in this setting.

General Classification • Language Modelling
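The bipolar construction described above can be sketched directly: given an activation f, apply f to half of the units and the point-reflected version -f(-x) to the other half, so the positive and negative unit means cancel and the layer's mean activation is pushed toward zero. This is a minimal NumPy illustration, not the paper's RNN setup.

```python
import numpy as np

def bipolar(f, x):
    """Bipolar version of activation f along the last axis: f(x) on
    even-indexed units and -f(-x) on odd-indexed ones, so the unit
    means cancel and the layer mean shifts toward zero."""
    out = np.empty_like(x)
    out[..., 0::2] = f(x[..., 0::2])
    out[..., 1::2] = -f(-x[..., 1::2])
    return out

relu = lambda z: np.maximum(0.0, z)
x = np.random.default_rng(0).normal(size=(1000, 128))
plain_mean = relu(x).mean()              # clearly positive for ReLU
bipolar_mean = bipolar(relu, x).mean()   # close to zero
```

For zero-mean inputs the two halves contribute means of opposite sign, so the bipolar ReLU's output mean sits near zero while the plain ReLU's is strictly positive.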

Direct Feedback Alignment Provides Learning in Deep Neural Networks

5 code implementations • NeurIPS 2016 • Arild Nøkland

In this work, the feedback alignment principle is used for training hidden layers more independently from the rest of the network, and from a zero initial condition.
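The mechanism can be sketched as follows: instead of back-propagating the output error through the transposed forward weights, direct feedback alignment (DFA) projects the error straight to each hidden layer through a fixed random matrix. The network shape, sizes, and toy task below are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def dfa_step(x, y_onehot, Ws, Bs, lr=0.5):
    """One DFA update for a two-hidden-layer tanh network: the output
    error e reaches every hidden layer through a fixed random feedback
    matrix (B1, B2) instead of the transposed forward weights."""
    W1, W2, W3 = Ws
    B1, B2 = Bs
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    logits = h2 @ W3
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    e = (p - y_onehot) / len(x)           # output error
    d2 = (e @ B2) * (1 - h2 ** 2)         # DFA: random projection of e
    d1 = (e @ B1) * (1 - h1 ** 2)         # ...sent directly to each layer
    W3 -= lr * h2.T @ e                   # output layer uses the true gradient
    W2 -= lr * h1.T @ d2
    W1 -= lr * x.T @ d1

# Toy problem and fixed random feedback paths (all sizes illustrative).
x = rng.normal(size=(64, 10))
y = (x[:, 0] > 0).astype(int)
y_onehot = np.eye(2)[y]
Ws = [rng.normal(scale=0.3, size=s) for s in [(10, 32), (32, 32), (32, 2)]]
Bs = [rng.normal(scale=0.3, size=(2, 32)) for _ in range(2)]
for _ in range(500):
    dfa_step(x, y_onehot, Ws, Bs)
```

Because the feedback matrices are fixed and random, each hidden layer's update depends only on the output error and its own activations, which is what lets the layers train more independently of the rest of the network.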

Improving Back-Propagation by Adding an Adversarial Gradient

1 code implementation • 14 Oct 2015 • Arild Nøkland

The first experimental results on MNIST show that the "adversarial back-propagation" method increases the resistance to adversarial examples and boosts the classification performance.

BIG-bench Machine Learning • General Classification
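A minimal sketch of the adversarial-gradient idea, under simplifying assumptions: perturb each training input a small step along the sign of the loss gradient with respect to the input, then compute the weight update on the perturbed batch. The linear softmax model and toy data below are illustrative stand-ins for the paper's MNIST networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def adversarial_bp_step(x, y_onehot, W, eps=0.05, lr=0.5):
    """Sketch of adversarial back-propagation for a softmax classifier:
    nudge the input along the sign of the input gradient, then take the
    weight-gradient step on the perturbed batch."""
    def grads(x_in):
        logits = x_in @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        e = (p - y_onehot) / len(x_in)
        return x_in.T @ e, e @ W.T         # (dW, dx)
    _, dx = grads(x)
    x_adv = x + eps * np.sign(dx)          # adversarial input perturbation
    dW, _ = grads(x_adv)                   # update computed on perturbed batch
    return W - lr * dW

# Toy two-class problem: the label is the sign of the first input feature.
x = rng.normal(size=(128, 10))
y = (x[:, 0] > 0).astype(int)
y_onehot = np.eye(2)[y]
W = rng.normal(scale=0.1, size=(10, 2))
for _ in range(300):
    W = adversarial_bp_step(x, y_onehot, W)
```

Training on the perturbed inputs makes the learned boundary less sensitive to small worst-case input changes, which is the source of the robustness the abstract reports.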
