no code implementations • 2 Feb 2024 • Martim Lisboa, Guillaume Bellec
Neurons in the brain communicate information via discrete, point-like events called spikes.
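To illustrate what "communication via spikes" means (an illustrative textbook sketch, not code from the paper), here is a minimal leaky integrate-and-fire neuron: the membrane potential leaks toward zero, integrates its input, and emits a binary spike event whenever it crosses a threshold.

```python
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron (standard textbook model).
    Returns a binary spike train: 1 where the membrane potential
    crossed the threshold, 0 elsewhere."""
    alpha = np.exp(-dt / tau)      # per-step leak factor
    v, spikes = 0.0, []
    for I in input_current:
        v = alpha * v + I          # leaky integration of the input
        s = int(v >= v_thresh)     # spike = discrete binary event
        spikes.append(s)
        v = v * (1 - s)            # reset the potential after a spike
    return np.array(spikes)

# Constant input drives the neuron to spike sparsely.
spikes = lif_spikes(np.full(50, 0.08))
print(spikes.sum(), "spikes out of", spikes.size, "time steps")
```

The binary, sparse output is what makes spike-based communication cheap: most time steps carry a 0 and cost nothing to transmit.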
no code implementations • 14 Jun 2023 • Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Communication by rare, binary spikes is a key factor for the energy efficiency of biological brains.
1 code implementation • NeurIPS 2023 • Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec
A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials.
1 code implementation • 2 Jun 2023 • Martin Barry, Wulfram Gerstner, Guillaume Bellec
"You never forget how to ride a bike", -- but how is that possible?
no code implementations • 23 Dec 2022 • Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence.
1 code implementation • 26 May 2022 • Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner
Can we use spiking neural networks (SNN) as generative models of multi-neuronal recordings, while taking into account that most neurons are unobserved?
1 code implementation • NeurIPS 2021 • Guillaume Bellec, Shuqi Wang, Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner
Fitting network models to neural activity is an important tool in neuroscience.
1 code implementation • NeurIPS 2021 • Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner
Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not accurately model how the brain learns.
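To make the BPTT baseline concrete, a toy sketch (illustrative only, not the alternative algorithm proposed in the paper): for a one-neuron linear recurrent net, BPTT replays the stored forward states and propagates the error backwards through every time step, which is what makes it biologically implausible -- the whole history must be kept and traversed in reverse.

```python
def rnn_loss(w, inputs, target):
    """Unroll h_t = w * h_{t-1} + x_t and return the squared
    error of the final state (toy objective)."""
    h = 0.0
    for x in inputs:
        h = w * h + x
    return 0.5 * (h - target) ** 2

def bptt_grad(w, inputs, target):
    """Backpropagation through time: store all states in the
    forward pass, then carry the error backwards step by step."""
    hs = [0.0]
    for x in inputs:
        hs.append(w * hs[-1] + x)
    delta = hs[-1] - target         # dL/dh_T at the last step
    grad = 0.0
    for t in range(len(inputs), 0, -1):
        grad += delta * hs[t - 1]   # local contribution dh_t/dw
        delta *= w                  # propagate error one step back
    return grad

inputs, target, w = [1.0, 0.5, -0.2], 0.3, 0.8
g = bptt_grad(w, inputs, target)
eps = 1e-6  # central finite difference as a correctness check
g_num = (rnn_loss(w + eps, inputs, target) - rnn_loss(w - eps, inputs, target)) / (2 * eps)
```

The backward loop over `range(len(inputs), 0, -1)` is the step with no plausible biological counterpart, which motivates forward-computable alternatives.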
3 code implementations • 25 Jan 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass
How the brain learns remains poorly understood, and this gap is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and implementable by known biological mechanisms.
1 code implementation • NeurIPS 2018 • Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain.
Ranked #22 on Speech Recognition on TIMIT
4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein
Neuromorphic hardware tends to limit the connectivity of the deep networks that can run on it.
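One way to train under such a connectivity limit is to keep the weight matrix sparse throughout training: deactivate connections whose parameter is driven below zero and regrow the same number at random empty positions. A minimal sketch in the spirit of such rewiring approaches (simplified for illustration; the paper's actual algorithm additionally handles connection signs and noisy updates):

```python
import numpy as np

rng = np.random.default_rng(0)

def rewire(theta, n_active):
    """Keep exactly n_active connections: connections with theta > 0
    are active; any that dropped to or below zero are replaced by
    re-activating random empty slots with a tiny initial value.
    (Simplified sketch, not the exact published algorithm.)"""
    active = theta > 0
    n_missing = n_active - int(active.sum())
    if n_missing > 0:
        empty = np.flatnonzero(~active)
        regrow = rng.choice(empty, size=n_missing, replace=False)
        theta[regrow] = 1e-3       # regrown connections start small
    return theta

theta = rng.normal(0.0, 0.1, size=100)
n_active = int((theta > 0).sum())  # fixed connectivity budget
theta -= 0.05                      # a parameter update prunes some connections
theta = rewire(theta, n_active)    # budget is restored by random regrowth
```

Because the number of active connections is constant at every step, the network never exceeds the hardware's connectivity budget during training.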
no code implementations • 17 Mar 2017 • Mihai A. Petrovici, Sebastian Schmitt, Johann Klähn, David Stöckel, Anna Schroeder, Guillaume Bellec, Johannes Bill, Oliver Breitwieser, Ilja Bytschok, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Sebastian Jeltsch, Vitali Karasenko, Mitja Kleider, Christoph Koke, Alexander Kononov, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Thomas Pfeil, Stefan Schiefer, Stefan Scholze, Anand Subramoney, Vasilis Thanasoulis, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, René Schüffny, Christian Mayr, Johannes Schemmel, Karlheinz Meier
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks.
1 code implementation • 6 Mar 2017 • Sebastian Schmitt, Johann Klaehn, Guillaume Bellec, Andreas Gruebl, Maurice Guettler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Vitali Karasenko, Mitja Kleider, Christoph Koke, Christian Mauch, Eric Mueller, Paul Mueller, Johannes Partzsch, Mihai A. Petrovici, Stefan Schiefer, Stefan Scholze, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, Christian Mayr, Johannes Schemmel, Karlheinz Meier
In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate.