no code implementations • ICLR 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea
An appealing alternative to training deep neural networks end-to-end is to use one or a few hidden layers whose weights are either fixed and random or trained with an unsupervised, local learning rule, and to train only a single readout layer with a supervised, local learning rule.
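A minimal sketch of this setup, assuming toy Gaussian data in place of a real benchmark (my own illustration, not the paper's code): a hidden layer with fixed random weights feeds a readout trained with a supervised, local delta rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes (stand-in for a real dataset such as MNIST).
n, d_in, d_hid, d_out = 200, 20, 100, 2
X = np.vstack([rng.normal(-1, 1, (n // 2, d_in)),
               rng.normal(+1, 1, (n // 2, d_in))])
y = np.repeat([0, 1], n // 2)
T = np.eye(d_out)[y]                                       # one-hot targets

W_hid = rng.normal(0, 1 / np.sqrt(d_in), (d_in, d_hid))    # fixed random hidden weights
H = np.maximum(X @ W_hid, 0)                               # ReLU hidden activity

W_out = np.zeros((d_hid, d_out))
lr = 0.01
for epoch in range(50):
    for i in rng.permutation(n):
        h, t = H[i], T[i]
        err = t - h @ W_out                # error is local to the readout layer
        W_out += lr * np.outer(h, err)     # delta rule: presynaptic activity * postsynaptic error

acc = np.mean((H @ W_out).argmax(1) == y)
print(f"train accuracy: {acc:.2f}")
```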
1 code implementation • NeurIPS 2021 • Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner
Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown.
no code implementations • 25 Sep 2019 • Berfin Simsek, Johanni Brea, Bernd Illing, Wulfram Gerstner
In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a "permutation point" where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.
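As a rough illustration of the permutation point (a sketch under my own assumptions, not the paper's construction): once two hidden neurons share the same input weight vector, their output weights can be interchanged along a continuous path without changing the network function.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                 # toy inputs

W_in = rng.normal(size=(5, 3))
W_in[:, 1] = W_in[:, 0]                      # neurons 0 and 1 collide: identical input weights
w_out = rng.normal(size=3)                   # output weights

def f(w_out):
    return np.maximum(X @ W_in, 0) @ w_out   # one-hidden-layer ReLU network

ref = f(w_out)
# Continuous path that interchanges w_out[0] and w_out[1] at the permutation point.
for t in np.linspace(0, 1, 6):
    w = w_out.copy()
    w[0] = (1 - t) * w_out[0] + t * w_out[1]
    w[1] = (1 - t) * w_out[1] + t * w_out[0]
    print(t, np.max(np.abs(f(w) - ref)))     # stays ~0 along the whole path
```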
no code implementations • 5 Jul 2019 • Johanni Brea, Berfin Simsek, Bernd Illing, Wulfram Gerstner
The permutation symmetry of neurons in each layer of a deep neural network gives rise not only to multiple equivalent global minima of the loss function, but also to first-order saddle points located on the path between the global minima.
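A small sketch of this symmetry (my own illustration; the network sizes are arbitrary): permuting hidden neurons, i.e. the columns of the first weight matrix together with the rows of the second, leaves the network function, and therefore the loss, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))

W1 = rng.normal(size=(8, 16))         # input -> hidden
W2 = rng.normal(size=(16, 4))         # hidden -> output

def f(W1, W2):
    return np.maximum(X @ W1, 0) @ W2

perm = rng.permutation(16)            # reorder the hidden units
out_orig = f(W1, W2)
out_perm = f(W1[:, perm], W2[perm, :])
print(np.allclose(out_orig, out_perm))  # True: the permuted weights are an equivalent point
```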
1 code implementation • 27 Feb 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea
These spiking models achieve >98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation.