no code implementations • NeurIPS 2007 • Claudia Clopath, André Longtin, Wulfram Gerstner
Independent component analysis (ICA) is a powerful method to decouple signals.
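A minimal sketch of ICA decoupling two mixed signals, using scikit-learn's FastICA as a generic reference point; the toy sources and mixing matrix are assumptions for illustration, and this is not the learning rule studied in the paper.

```python
# Illustration: recover two independent sources from their linear mixtures.
# Uses scikit-learn's FastICA as a generic baseline (not the paper's method).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
mixing = rng.normal(size=(2, 2))                          # unknown mixing matrix
observed = sources @ mixing.T                             # what the "sensors" see

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)   # source estimates (up to sign/scale/order)
```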
no code implementations • NeurIPS 2009 • Henning Sprekeler, Guillaume Hennequin, Wulfram Gerstner
Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect.
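A minimal REINFORCE-style policy-gradient sketch for a single Bernoulli spiking neuron; the toy reward and all parameters are assumptions, and only the generic spike-time version is shown, not the code-specific rules derived in the paper.

```python
# Policy gradient for a stochastic (Bernoulli) spiking neuron:
# weights move along reward * d/dw log P(spike train).
import numpy as np

rng = np.random.default_rng(1)
T, n_in = 100, 20
w = np.zeros(n_in)
lr = 0.01

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for episode in range(200):
    x = rng.normal(size=(T, n_in))          # presynaptic input (toy data)
    p = sigmoid(x @ w)                       # per-step spike probability
    s = (rng.random(T) < p).astype(float)    # sampled spike train
    reward = s.sum() / T                     # toy reward (assumed to depend on spike count)
    eligibility = ((s - p)[:, None] * x).sum(axis=0)   # d/dw log P(spike train)
    w += lr * reward * eligibility           # ascend the expected reward (no baseline)
```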
no code implementations • NeurIPS 2010 • Felipe Gerhard, Wulfram Gerstner
Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains.
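A minimal sketch of fitting a Poisson GLM to a binned spike train with a stimulus-history design matrix; the data, filter, and regularization are toy assumptions, not the models evaluated in the paper.

```python
# Fit a Poisson GLM: binned spike counts regressed on recent stimulus history.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
T, lags = 5000, 10
stim = rng.normal(size=T)

# design matrix: stimulus over the last `lags` time bins
X = np.stack([np.roll(stim, k) for k in range(lags)], axis=1)[lags:]
true_filter = np.exp(-np.arange(lags) / 3.0)
rate = np.exp(X @ true_filter - 2.0)                 # conditional intensity
spikes = rng.poisson(rate)                           # simulated binned spike counts

glm = PoissonRegressor(alpha=1e-3).fit(X, spikes)    # estimates the linear filter
```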
no code implementations • NeurIPS 2011 • Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner
We derive a plausible learning rule updating the synaptic efficacies for feedforward, feedback and lateral connections between observed and latent neurons.
no code implementations • NeurIPS 2011 • Skander Mensi, Richard Naud, Wulfram Gerstner
First, we find the analytical expressions relating the subthreshold voltage of the Adaptive Exponential Integrate-and-Fire model (AdEx) to the Spike Response Model with escape noise (SRM, as an example of a GLM).
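A minimal sketch of the escape-noise spike mechanism of an SRM/GLM neuron; the voltage trace and escape-function parameters are toy assumptions, and the AdEx-to-SRM mapping derived in the paper is not reproduced here.

```python
# Escape noise: spikes are emitted stochastically with a rate that grows
# exponentially with the distance of the subthreshold voltage to threshold.
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3                       # time step [s]
T = 2000
V = -65e-3 + 5e-3 * np.sin(np.linspace(0, 20, T))    # toy subthreshold voltage [V]

V_T = -55e-3                    # soft threshold [V]
delta_V = 2e-3                  # sharpness of the escape function [V]
rho_0 = 100.0                   # rate at threshold [Hz]

rho = rho_0 * np.exp((V - V_T) / delta_V)            # instantaneous escape rate
p_spike = 1.0 - np.exp(-rho * dt)                    # per-bin spike probability
spikes = rng.random(T) < p_spike
```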
no code implementations • NeurIPS 2015 • Dane S. Corneil, Wulfram Gerstner
We show that this representation can be implemented in a neural attractor network model, resulting in bump-like activity profiles resembling those of the CA3 region of hippocampus.
no code implementations • 4 Jan 2016 • Carlos S. N. Brito, Wulfram Gerstner
The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity.
no code implementations • 17 Jun 2016 • Mohammadjavad Faraji, Kerstin Preuschoff, Wulfram Gerstner
Surprise describes a range of phenomena from unexpected events to behavioral responses.
no code implementations • 23 Jun 2016 • Florian Colombo, Samuel P. Muscinelli, Alexander Seeholzer, Johanni Brea, Wulfram Gerstner
A big challenge in algorithmic composition is to devise a model that is both easily trainable and able to reproduce the long-range temporal dependencies typical of music.
no code implementations • 9 Dec 2016 • Thomas Mesnard, Wulfram Gerstner, Johanni Brea
In machine learning, error back-propagation in multi-layer neural networks (deep learning) has been impressively successful in supervised and reinforcement learning tasks.
no code implementations • 21 Feb 2017 • Aditya Gilra, Wulfram Gerstner
The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics, while an online and local rule changes the weights.
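A heavily simplified rate-based sketch of this feedback idea, assuming a toy sinusoidal target: the output error is projected back into the network through fixed random weights with a negative gain, while an online, local delta rule trains the readout. This is an illustration only, not the full spiking FOLLOW scheme.

```python
# Error feedback through fixed random connections with negative gain,
# plus a local (presynaptic activity x error) readout update.
import numpy as np

rng = np.random.default_rng(0)
N, n_out, T, dt = 200, 1, 5000, 1e-3
W_rec = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # fixed recurrent weights
W_fb = rng.normal(size=(N, n_out))                         # fixed random feedback
W_out = np.zeros((n_out, N))                               # learned readout
k, lr, tau = 10.0, 1e-3, 20e-3

r = np.zeros(N)
for t in range(T):
    target = np.sin(2 * np.pi * 1.0 * t * dt)              # desired output
    y = W_out @ r                                           # network output, shape (1,)
    err = y - target
    inp = W_rec @ r - k * (W_fb @ err)                      # negative-gain error feedback
    r += dt / tau * (-r + np.tanh(inp))                     # leaky rate dynamics
    W_out -= lr * np.outer(err, r)                          # local delta-rule update
```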
2 code implementations • 28 Dec 2017 • Marco Martinolli, Wulfram Gerstner, Aditya Gilra
Learning and memory are intertwined in our brain and their relationship is at the core of several recent neural network models.
1 code implementation • ICML 2018 • Aditya Gilra, Wulfram Gerstner
Here, we employ a supervised scheme, Feedback-based Online Local Learning Of Weights (FOLLOW), to train a network of heterogeneous spiking neurons with hidden layers, to control a two-link arm so as to reproduce a desired state trajectory.
1 code implementation • ICML 2018 • Dane Corneil, Wulfram Gerstner, Johanni Brea
Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e., they typically require significantly more playing experience than humans to reach an equal performance level.
no code implementations • 17 Dec 2018 • Florian Colombo, Johanni Brea, Wulfram Gerstner
As deep learning advances, algorithms for music composition increase in performance.
1 code implementation • 27 Feb 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea
These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation.
no code implementations • 5 Jul 2019 • Johanni Brea, Berfin Simsek, Bernd Illing, Wulfram Gerstner
The permutation symmetry of neurons in each layer of a deep neural network gives rise not only to multiple equivalent global minima of the loss function, but also to first-order saddle points located on the path between the global minima.
no code implementations • 5 Jul 2019 • Vasiliki Liakoni, Alireza Modirshanechi, Wulfram Gerstner, Johanni Brea
Surprise-based learning allows agents to rapidly adapt to non-stationary stochastic environments characterized by sudden changes.
no code implementations • 25 Sep 2019 • Berfin Simsek, Johanni Brea, Bernd Illing, Wulfram Gerstner
In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct continuous paths between equivalent global minima that lead through a 'permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange.
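A minimal sketch of the permutation symmetry underlying these paths: swapping the incoming and outgoing weights of two neurons in the same hidden layer leaves the input-output map unchanged. The network sizes here are arbitrary, and the continuous path through the permutation point itself is not reproduced.

```python
# Swapping two hidden neurons (rows of W1, columns of W2) gives an
# equivalent network: same function, different point in parameter space.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 2
W1, W2 = rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_out, n_hid))

def mlp(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

P = np.arange(n_hid)
P[[0, 2]] = P[[2, 0]]            # permute hidden neurons 0 and 2
W1p, W2p = W1[P], W2[:, P]

x = rng.normal(size=n_in)
assert np.allclose(mlp(x, W1, W2), mlp(x, W1p, W2p))   # identical input-output map
```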
1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Roman Pogodin, Dane Corneil, Alexander Seeholzer, Joseph Heng, Wulfram Gerstner
Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance.
1 code implementation • NeurIPS 2021 • Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner
Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown.
no code implementations • 21 May 2021 • Carlos Stein N. Brito, Wulfram Gerstner
While existing synaptic plasticity models reproduce some of the observed receptive-field properties, a major obstacle is the sensitivity of Hebbian learning to omnipresent spurious correlations in cortical networks which can overshadow relevant latent input features.
1 code implementation • 25 May 2021 • Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea
For a two-layer overparameterized network of width $r^* + h =: m$ we explicitly describe the manifold of global minima: it consists of $T(r^*, m)$ affine subspaces of dimension at least $h$ that are connected to one another.
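A minimal illustration of why overparameterization creates affine subspaces of equal-loss parameters: duplicating a hidden neuron and splitting its outgoing weight along a line leaves the network function unchanged for every split. The counting $T(r^*, m)$ from the paper is not reproduced here.

```python
# Two hidden neurons with identical input weights and output weights
# lam*a and (1-lam)*a realize the same function for every lam:
# a one-dimensional affine subspace of equal-loss parameters.
import numpy as np

rng = np.random.default_rng(0)
n_in = 3
w_in, a = rng.normal(size=n_in), 1.7          # one neuron of a narrow optimal network

def wide_net(x, lam):
    h = np.tanh(w_in @ x)
    return lam * a * h + (1.0 - lam) * a * h

x = rng.normal(size=n_in)
outputs = [wide_net(x, lam) for lam in (-1.0, 0.0, 0.3, 1.0, 2.5)]
assert np.allclose(outputs, outputs[0])       # same output along the whole line
```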
1 code implementation • NeurIPS 2021 • Guillaume Bellec, Shuqi Wang, Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner
Fitting network models to neural activity is an important tool in neuroscience.
1 code implementation • 26 May 2022 • Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner
Can we use spiking neural networks (SNN) as generative models of multi-neuronal recordings, while taking into account that most neurons are unobserved?
no code implementations • 19 Aug 2022 • Georgios Iatropoulos, Johanni Brea, Wulfram Gerstner
We consider the problem of training a neural network to store a set of patterns with maximal noise robustness.
no code implementations • 2 Sep 2022 • Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner
Going beyond this technical analysis, we propose a taxonomy of surprise definitions and classify them into four conceptual categories based on the quantity they measure: (i) 'prediction surprise' measures a mismatch between a prediction and an observation; (ii) 'change-point detection surprise' measures the probability of a change in the environment; (iii) 'confidence-corrected surprise' explicitly accounts for the effect of confidence; and (iv) 'information gain surprise' measures the belief-update upon a new observation.
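A minimal sketch, for a simple Beta-Bernoulli belief, of two of these categories: prediction surprise as the negative log predictive probability of the observation, and information-gain surprise as the KL divergence from prior to posterior (Bayesian surprise). The exact definitions reviewed in the paper differ in detail.

```python
# Two surprise measures for a Beta(a, b) belief about P(x = 1) after seeing x.
import numpy as np
from scipy.special import betaln, digamma

a, b = 2.0, 2.0            # prior belief
x = 1                      # new binary observation

p_x = a / (a + b) if x == 1 else b / (a + b)        # predictive probability of x
prediction_surprise = -np.log(p_x)                   # Shannon ("prediction") surprise

a_post, b_post = a + x, b + (1 - x)                  # conjugate posterior update
# KL( Beta(a_post, b_post) || Beta(a, b) )
kl = (betaln(a, b) - betaln(a_post, b_post)
      + (a_post - a) * digamma(a_post)
      + (b_post - b) * digamma(b_post)
      + (a + b - a_post - b_post) * digamma(a_post + b_post))
information_gain_surprise = kl                       # Bayesian surprise (belief update)
```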
no code implementations • 23 Dec 2022 • Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence.
2 code implementations • 25 Jan 2023 • Johanni Brea, Flavio Martinelli, Berfin Şimşek, Wulfram Gerstner
MLPGradientFlow is a software package to solve numerically the gradient flow differential equation $\dot \theta = -\nabla \mathcal L(\theta; \mathcal D)$, where $\theta$ are the parameters of a multi-layer perceptron, $\mathcal D$ is some data set, and $\nabla \mathcal L$ is the gradient of a loss function.
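A minimal sketch of the gradient-flow equation itself, integrating $\dot \theta = -\nabla \mathcal L(\theta; \mathcal D)$ for a tiny one-hidden-layer perceptron with SciPy's ODE solver; this illustrates the equation only and is not the MLPGradientFlow package or its API.

```python
# Integrate d theta / dt = -grad L(theta; D) for a tiny MLP with squared loss.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n_in, n_hid, n_data = 2, 3, 50
X = rng.normal(size=(n_data, n_in))
y = np.sin(X[:, 0])                              # toy regression targets

def unpack(theta):
    W1 = theta[: n_hid * n_in].reshape(n_hid, n_in)
    w2 = theta[n_hid * n_in:]
    return W1, w2

def rhs(t, theta):
    W1, w2 = unpack(theta)
    H = np.tanh(X @ W1.T)                        # hidden activities, (n_data, n_hid)
    err = H @ w2 - y                             # residuals
    g_w2 = H.T @ err / n_data                    # grad of 0.5 * mean squared error
    g_W1 = ((err[:, None] * w2) * (1 - H**2)).T @ X / n_data
    return -np.concatenate([g_W1.ravel(), g_w2])  # right-hand side: -grad L

theta0 = rng.normal(scale=0.5, size=n_hid * n_in + n_hid)
sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=theta0, rtol=1e-8, atol=1e-8)
```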
no code implementations • 9 Mar 2023 • Valentin Schmutz, Johanni Brea, Wulfram Gerstner
Can the dynamics of Spiking Neural Networks (SNNs) approximate the dynamics of Recurrent Neural Networks (RNNs)?
no code implementations • 25 Apr 2023 • Flavio Martinelli, Berfin Simsek, Wulfram Gerstner, Johanni Brea
Can we identify the parameters of a neural network by probing its input-output mapping?
1 code implementation • 2 Jun 2023 • Martin Barry, Wulfram Gerstner, Guillaume Bellec
"You never forget how to ride a bike", -- but how is that possible?
1 code implementation • NeurIPS 2023 • Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec
A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials.
no code implementations • 14 Jun 2023 • Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Communication by rare, binary spikes is a key factor for the energy efficiency of biological brains.
1 code implementation • NeurIPS 2023 • Berfin Şimşek, Amire Bendjeddou, Wulfram Gerstner, Johanni Brea
Approximating $f^*$ with a neural network with $n < k$ neurons can thus be seen as fitting an under-parameterized "student" network with $n$ neurons to a "teacher" network with $k$ neurons.
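A minimal sketch of this under-parameterized teacher-student setup: data are labeled by a "teacher" with $k$ hidden neurons and a "student" with $n < k$ neurons is fitted by plain gradient descent. Widths, activation function, and step size are arbitrary assumptions.

```python
# Fit an under-parameterized student (n hidden units) to a teacher (k units).
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 8, 3                                   # input dim, teacher width, student width
W_t, a_t = rng.normal(size=(k, d)), rng.normal(size=k)

X = rng.normal(size=(1000, d))
y = np.tanh(X @ W_t.T) @ a_t                        # teacher labels f*(x)

W_s, a_s = rng.normal(size=(n, d)), rng.normal(size=n)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W_s.T)                          # student hidden activities
    err = H @ a_s - y
    a_s -= lr * H.T @ err / len(X)                  # gradient step on output weights
    W_s -= lr * ((err[:, None] * a_s) * (1 - H**2)).T @ X / len(X)  # and input weights
```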
no code implementations • ICLR 2019 • Bernd Illing, Wulfram Gerstner, Johanni Brea
An appealing alternative to training deep neural networks is to use one or a few hidden layers with fixed random weights, or weights trained with an unsupervised, local learning rule, and to train a single readout layer with a supervised, local learning rule.
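A minimal sketch of this alternative: a single fixed random hidden layer followed by a supervised linear readout, here trained with plain logistic regression on a small toy dataset. The dataset, layer width, and readout choice are illustrative assumptions, not the learning rules studied in the paper.

```python
# Fixed random hidden layer + trained linear readout on a toy digits task.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 500
W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], n_hidden))  # never trained

def features(X):
    return np.maximum(X @ W, 0.0)                 # random ReLU hidden layer

readout = LogisticRegression(max_iter=2000).fit(features(X_train), y_train)
accuracy = readout.score(features(X_test), y_test)
```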