Search Results for author: Wulfram Gerstner

Found 26 papers, 8 papers with code

Localized random projections challenge benchmarks for bio-plausible deep learning

no code implementations · ICLR 2019 · Bernd Illing, Wulfram Gerstner, Johanni Brea

An appealing alternative to training deep neural networks end-to-end is to use one or a few hidden layers whose weights are fixed and random, or trained with an unsupervised local learning rule, and to train only a single readout layer with a supervised local learning rule.
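A minimal sketch of the recipe in this abstract, assuming a toy 2-D dataset in place of image data, and using ridge regression as a stand-in for the paper's supervised local readout rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian blobs standing in for image classes.
n = 200
X = rng.normal(size=(n, 2)) + np.repeat([[2.0, 2.0], [-2.0, -2.0]], n // 2, axis=0)
y = np.repeat([1.0, -1.0], n // 2)

# One hidden layer with fixed random weights (never trained).
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
H = np.maximum(X @ W, 0.0)  # ReLU random features

# Train only the readout layer; ridge regression stands in for a
# supervised local learning rule.
lam = 1e-2
w_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

acc = np.mean(np.sign(H @ w_out) == y)  # training accuracy
```

Only `w_out` is learned here; the random projection `W` stays fixed throughout, which is the core of the "localized random projections" baseline.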

General Classification · Object Recognition

Mesoscopic modeling of hidden spiking neurons

no code implementations · 26 May 2022 · Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner

Can we use spiking neural networks (SNN) as generative models of multi-neuronal recordings, while taking into account that most neurons are unobserved?

Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances

1 code implementation · 25 May 2021 · Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea

For a two-layer overparameterized network of width $ r^*+ h =: m $ we explicitly describe the manifold of global minima: it consists of $ T(r^*, m) $ affine subspaces of dimension at least $ h $ that are connected to one another.

Correlation-invariant synaptic plasticity

no code implementations · 21 May 2021 · Carlos S. N. Brito, Wulfram Gerstner

Here we develop a theory for synaptic plasticity that is invariant to second-order correlations in the input.

Local plasticity rules can learn deep representations using self-supervised contrastive predictions

1 code implementation · NeurIPS 2021 · Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner

Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown.

Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks

1 code implementation · NeurIPS Workshop Neuro_AI 2019 · Roman Pogodin, Dane Corneil, Alexander Seeholzer, Joseph Heng, Wulfram Gerstner

Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.

Weight-space symmetry in neural network loss landscapes revisited

no code implementations · 25 Sep 2019 · Berfin Simsek, Johanni Brea, Bernd Illing, Wulfram Gerstner

In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.

Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape

no code implementations · 5 Jul 2019 · Johanni Brea, Berfin Simsek, Bernd Illing, Wulfram Gerstner

The permutation symmetry of neurons in each layer of a deep neural network gives rise not only to multiple equivalent global minima of the loss function, but also to first-order saddle points located on the path between the global minima.
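The permutation symmetry itself is easy to check numerically. A minimal sketch for a two-layer ReLU network (this only demonstrates the underlying invariance, not the paper's saddle-point construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, w2, x):
    """Two-layer ReLU network with scalar output."""
    return w2 @ np.maximum(W1 @ x, 0.0)

W1 = rng.normal(size=(4, 3))   # input -> hidden weights
w2 = rng.normal(size=4)        # hidden -> output weights
x = rng.normal(size=3)

# Swap hidden neurons 0 and 1: permute the rows of W1 together with
# the matching entries of w2. The function computed by the network
# is unchanged, so every minimum of the loss has permuted twins.
perm = np.array([1, 0, 2, 3])
y_orig = forward(W1, w2, x)
y_perm = forward(W1[perm], w2[perm], x)
```

Because the permuted network computes the same function, both weight settings have the same loss on any dataset, which is what places equal-loss copies (and the saddles between them) across the landscape.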

Learning in Volatile Environments with the Bayes Factor Surprise

no code implementations · 5 Jul 2019 · Vasiliki Liakoni, Alireza Modirshanechi, Wulfram Gerstner, Johanni Brea

Surprise-based learning allows agents to rapidly adapt to non-stationary stochastic environments characterized by sudden changes.

Bayesian Inference

Biologically plausible deep learning -- but how far can we go with shallow networks?

1 code implementation · 27 Feb 2019 · Bernd Illing, Wulfram Gerstner, Johanni Brea

These spiking models achieve >98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation.

Learning to Generate Music with BachProp

no code implementations · 17 Dec 2018 · Florian Colombo, Johanni Brea, Wulfram Gerstner

As deep learning advances, algorithms for music composition improve in performance.

Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation

1 code implementation · ICML 2018 · Dane Corneil, Wulfram Gerstner, Johanni Brea

Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e., they typically require significantly more playing experience than humans to reach an equal performance level.

Non-linear motor control by local learning in spiking neural networks

1 code implementation · ICML 2018 · Aditya Gilra, Wulfram Gerstner

Here, we employ a supervised scheme, Feedback-based Online Local Learning Of Weights (FOLLOW), to train a network of heterogeneous spiking neurons with hidden layers, to control a two-link arm so as to reproduce a desired state trajectory.

Multi-timescale memory dynamics in a reinforcement learning network with attention-gated memory

2 code implementations · 28 Dec 2017 · Marco Martinolli, Wulfram Gerstner, Aditya Gilra

Learning and memory are intertwined in our brain and their relationship is at the core of several recent neural network models.

Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

no code implementations · 21 Feb 2017 · Aditya Gilra, Wulfram Gerstner

The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics, while an online and local rule changes the weights.

Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity

no code implementations · 9 Dec 2016 · Thomas Mesnard, Wulfram Gerstner, Johanni Brea

In machine learning, error back-propagation in multi-layer neural networks (deep learning) has been impressively successful in supervised and reinforcement learning tasks.

General Classification · reinforcement-learning

Algorithmic Composition of Melodies with Deep Recurrent Neural Networks

no code implementations · 23 Jun 2016 · Florian Colombo, Samuel P. Muscinelli, Alexander Seeholzer, Johanni Brea, Wulfram Gerstner

A big challenge in algorithmic composition is to devise a model that is both easily trainable and able to reproduce the long-range temporal dependencies typical of music.

Nonlinear Hebbian learning as a unifying principle in receptive field formation

no code implementations · 4 Jan 2016 · Carlos S. N. Brito, Wulfram Gerstner

The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity.
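A minimal sketch in the spirit of this line of work; the specific update (weight change proportional to the input times the cubed postsynaptic activity, with renormalization) is an illustrative assumption, not the paper's exact rule. On inputs with one heavy-tailed direction, such a rule aligns the weight vector with that direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs with unit variance in every direction, but a heavy-tailed
# (sparse, Laplace-distributed) signal along dimension 0.
n, d = 2000, 5
X = rng.normal(size=(n, d))
X[:, 0] = rng.laplace(scale=1.0 / np.sqrt(2.0), size=n)

def nonlinear_hebbian(X, eta=0.1, n_steps=300):
    """Batch nonlinear Hebbian updates, dw proportional to the mean
    of x * (w.x)**3, with w renormalized after every step."""
    d = X.shape[1]
    w = np.ones(d) / np.sqrt(d)              # deterministic init
    for _ in range(n_steps):
        y = X @ w                            # postsynaptic activity
        w = w + eta * (X.T @ y**3) / len(X)  # nonlinear Hebbian step
        w = w / np.linalg.norm(w)            # keep ||w|| = 1
    return w

w = nonlinear_hebbian(X)  # aligns with the heavy-tailed direction
```

A plain (linear) Hebbian rule would only see the covariance, which is isotropic here; the nonlinearity is what lets the rule pick out the higher-order (sparse) structure, which is the unifying point of the abstract.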

Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments

no code implementations · NeurIPS 2015 · Dane S. Corneil, Wulfram Gerstner

We show that this representation can be implemented in a neural attractor network model, resulting in bump-like activity profiles resembling those of the CA3 region of hippocampus.

Variational Learning for Recurrent Spiking Networks

no code implementations · NeurIPS 2011 · Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner

We derive a plausible learning rule updating the synaptic efficacies for feedforward, feedback and lateral connections between observed and latent neurons.

Variational Inference

From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

no code implementations · NeurIPS 2011 · Skander Mensi, Richard Naud, Wulfram Gerstner

First we find the analytical expressions relating the subthreshold voltage from the Adaptive Exponential Integrate-and-Fire model (AdEx) to the Spike-Response Model with escape noise (SRM as an example of a GLM).

Code-specific policy gradient rules for spiking neurons

no code implementations · NeurIPS 2009 · Henning Sprekeler, Guillaume Hennequin, Wulfram Gerstner

Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect.
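For a single Bernoulli spike/no-spike unit, the generic policy-gradient (REINFORCE) rule takes the form dw = eta * R * (s - p) * x, where s is the sampled spike and p the firing probability. A minimal sketch (the reward scheme below is an illustrative assumption, not one of the paper's code-specific rules):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# One Bernoulli "neuron": spike s ~ Bernoulli(p), p = sigmoid(w.x).
# Reward the neuron whenever it spikes for this input, so the
# policy-gradient rule should drive p toward 1.
x = np.array([1.0, -1.0, 0.5])
w = np.zeros(3)
eta = 0.5

for _ in range(500):
    p = sigmoid(w @ x)
    s = float(rng.random() < p)    # stochastic spike
    R = s                          # reward: 1 only if the neuron spiked
    w += eta * R * (s - p) * x     # REINFORCE / policy-gradient step

p_final = sigmoid(w @ x)
```

The factor (s - p) * x is the score-function term for a Bernoulli policy; the paper's point is that which spike-train features enter the reward (spike count, timing, etc.) determines which rule of this family emerges.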
