Search Results for author: Bernd Illing

Found 5 papers, 2 papers with code

Localized random projections challenge benchmarks for bio-plausible deep learning

no code implementations • ICLR 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea

An appealing alternative to training deep neural networks is to use one or a few hidden layers whose weights are either fixed and random or trained with an unsupervised, local learning rule, and to train a single readout layer with a supervised, local learning rule (see the sketch after this entry).

General Classification • Object Recognition
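The fixed-random-projection idea described above can be illustrated with a short sketch. This is not the authors' code; the layer sizes and learning rate are illustrative assumptions. A hidden layer with fixed random weights feeds a linear readout trained with the delta rule, a supervised rule that is local to the readout layer.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 784, 1000, 10        # MNIST-sized input; sizes are illustrative
W_rand = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_hidden, n_in))  # fixed, never updated

def features(x):
    """Random projection followed by a ReLU nonlinearity (weights stay fixed)."""
    return np.maximum(W_rand @ x, 0.0)

W_out = np.zeros((n_out, n_hidden))          # the only trained weights
lr = 1e-3                                    # illustrative learning rate

def readout_step(x, y_onehot):
    """One delta-rule update for the readout: it uses only the readout's own
    pre-synaptic activity h, its output y_hat, and the supervised error."""
    global W_out
    h = features(x)
    y_hat = W_out @ h
    W_out += lr * np.outer(y_onehot - y_hat, h)
```

Calling readout_step over labelled examples trains only W_out; W_rand is never touched, which is what makes such schemes attractive from a biological-plausibility standpoint.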

Local plasticity rules can learn deep representations using self-supervised contrastive predictions

1 code implementation • NeurIPS 2021 • Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner

Learning in the brain is poorly understood, and learning rules that respect biological constraints yet yield deep hierarchical representations are still unknown.

Weight-space symmetry in neural network loss landscapes revisited

no code implementations • 25 Sep 2019 • Berfin Simsek, Johanni Brea, Bernd Illing, Wulfram Gerstner

In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a "permutation point" where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.
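A quick way to see the symmetry behind such permutation points, as a hedged illustration rather than the paper's actual construction: in a one-hidden-layer network, swapping the input weight rows and output weight columns of two hidden neurons leaves the network function, and hence the loss, unchanged, and at the midpoint of a straight line between the original and the permuted parameters the two neurons' weight vectors coincide. The toy sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))        # hidden-layer weights: 5 neurons, 3 inputs (toy sizes)
W2 = rng.normal(size=(2, 5))        # readout weights: 2 outputs

def f(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

# Swap hidden neurons 0 and 1: permute the rows of W1 and the columns of W2.
perm = np.array([1, 0, 2, 3, 4])
W1p, W2p = W1[perm], W2[:, perm]

x = rng.normal(size=3)
assert np.allclose(f(x, W1, W2), f(x, W1p, W2p))   # identical network function

# Halfway along the straight line from (W1, W2) to (W1p, W2p), the two
# neurons' input and output weight vectors collide (they become identical).
W1_mid, W2_mid = 0.5 * (W1 + W1p), 0.5 * (W2 + W2p)
assert np.allclose(W1_mid[0], W1_mid[1]) and np.allclose(W2_mid[:, 0], W2_mid[:, 1])
```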

Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape

no code implementations • 5 Jul 2019 • Johanni Brea, Berfin Simsek, Bernd Illing, Wulfram Gerstner

The permutation symmetry of neurons in each layer of a deep neural network gives rise not only to multiple equivalent global minima of the loss function, but also to first-order saddle points located on the path between the global minima.
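As a back-of-the-envelope complement to the statement above (not a result taken from the paper): permuting the neurons within each hidden layer independently maps one minimum to another parameter vector with exactly the same loss, so a single global minimum comes with at least a product of factorials of equivalent copies. The layer sizes below are illustrative assumptions.

```python
from math import factorial, prod

hidden_layer_sizes = [10, 10, 10]    # illustrative sizes, not taken from the paper
copies = prod(factorial(n) for n in hidden_layer_sizes)
print(copies)                        # 47784725839872000000, i.e. roughly 4.8e19 copies
```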

Biologically plausible deep learning -- but how far can we go with shallow networks?

1 code implementation • 27 Feb 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea

These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation.
