1 code implementation • 14 Aug 2024 • Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, Robert Legenstein
As a promising advancement, a computationally lightweight augmentation of the LIF neuron model with an adaptation mechanism has recently gained popularity, driven by demonstrations of its superior performance on spatio-temporal processing tasks.
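For intuition, here is a minimal sketch of one common variant of such an adaptation mechanism: a spike-triggered adaptive threshold added to a discrete-time LIF update. The time constants, threshold, and coupling strength `beta` are illustrative choices, not the model or parameters of the paper.

```python
import numpy as np

def alif_step(v, a, x, dt=1e-3, tau_v=20e-3, tau_a=200e-3,
              beta=1.8, v_th=1.0):
    """One Euler step of an adaptive LIF (ALIF) neuron.

    v: membrane potential, a: adaptation variable, x: input current.
    The effective threshold v_th + beta * a rises after every spike,
    giving the neuron a slowly decaying memory of its own activity.
    """
    v = v + dt / tau_v * (-v + x)        # leaky integration of the input
    a = a - dt / tau_a * a               # adaptation decays back to zero
    spike = float(v >= v_th + beta * a)  # fire if threshold is crossed
    v = v * (1.0 - spike)                # reset the membrane on a spike
    a = a + spike                        # each spike raises the threshold
    return v, a, spike

v, a = 0.0, 0.0
for t in range(1000):                    # under constant drive, the rising
    v, a, s = alif_step(v, a, x=2.0)     # threshold stretches inter-spike
                                         # intervals over time
```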
no code implementations • 22 Apr 2024 • Thomas Ortner, Horst Petschenig, Athanasios Vasilopoulos, Roland Renner, Špela Brglez, Thomas Limbacher, Enrique Piñero, Alejandro Linares-Barranco, Angeliki Pantazi, Robert Legenstein
In this work, we pair learning-to-learn (L2L) with in-memory computing neuromorphic hardware (NMHW) based on phase-change memory devices to build efficient AI models that can rapidly adapt to new tasks.
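As a toy picture of the two nested optimization loops (not the NMHW interface or the L2L procedure of the paper), the sketch below meta-learns an initial weight across a task family while every weight write passes through a hypothetical `pcm_write` noise model mimicking device-level imprecision; the Reptile-style meta-update is an assumption made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pcm_write(w, noise=0.05):
    """Hypothetical stand-in for programming a weight into phase-change
    memory: every write lands near the target value, with device noise."""
    return w + noise * rng.standard_normal(np.shape(w))

def sample_task():
    """A family of related tasks: fit y = a * x with a random slope a."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, 32)
    return x, a * x

w0 = np.zeros(1)                         # meta-learned initial weight
for outer in range(200):                 # outer loop: learning-to-learn
    x, y = sample_task()
    w = pcm_write(w0)                    # deploy initial weight to "PCM"
    for inner in range(5):               # inner loop: rapid adaptation
        grad = np.mean((w * x - y) * x)  # d/dw of 0.5 * (w*x - y)^2
        w = pcm_write(w - 0.5 * grad)    # every update is a noisy write
    w0 = w0 + 0.05 * (w - w0)            # Reptile-style meta-update
```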
1 code implementation • 15 Nov 2023 • Ozan Özdenizci, Robert Legenstein
Spiking neural networks (SNNs) provide an energy-efficient alternative to a variety of artificial neural network (ANN) based AI applications.
1 code implementation • 29 Jul 2022 • Ozan Özdenizci, Robert Legenstein
Image restoration under adverse weather conditions has been of significant interest for various computer vision applications.
Ranked #3 on Single Image Deraining on Raindrop
1 code implementation • 23 May 2022 • Thomas Limbacher, Ozan Özdenizci, Robert Legenstein
Memory is a key component of biological neural systems that enables the retention of information over temporal scales ranging from hundreds of milliseconds up to years.
Ranked #7 on Question Answering on bAbi
1 code implementation • CVPR 2022 • Ozan Özdenizci, Robert Legenstein
Experimental benchmark evaluations show that output code matching is superior to existing regularized weight quantization-based defenses, and constitutes an effective defense against stealthy weight bit-flip attacks.
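The idea behind output code matching can be illustrated with a toy inference step: instead of one-hot outputs, each class is assigned a longer bipolar codeword, and classification matches the network output to the nearest codeword, so a single flipped output bit no longer cleanly remaps one class onto another. The 8-bit codebook below is illustrative, not the codes used in the paper.

```python
import numpy as np

# Illustrative bipolar codewords for a 4-class problem; the rows are
# mutually orthogonal, so corrupting one output dimension leaves the
# matching scores of the classes well separated.
codebook = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
    [-1,  1,  1, -1, -1,  1,  1, -1],
], dtype=float)

def predict(outputs):
    """Classify by matching the network's 8-dim output (e.g. tanh
    activations in [-1, 1]) to the nearest codeword."""
    scores = codebook @ outputs          # inner-product code matching
    return int(np.argmax(scores))

out = np.tanh(np.array([0.9, 1.2, 0.8, 1.1, -0.7, -1.0, -0.9, -1.3]))
print(predict(out))                      # -> 0
```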
1 code implementation • NeurIPS 2020 • Thomas Limbacher, Robert Legenstein
We show that the network can be optimized to utilize the Hebbian plasticity processes for its computations.
Ranked #4 on Question Answering on bAbi
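The core mechanism harnessed in the entry above, storing associations through Hebbian plasticity and retrieving them by pattern matching, can be sketched in a few lines. In the paper this plasticity is embedded in a spiking network optimized end-to-end; here it is reduced to a bare outer-product memory.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
M = np.zeros((d, d))                     # Hebbian association matrix

key = rng.standard_normal(d)
key /= np.linalg.norm(key)               # unit-norm key for clean recall
value = rng.standard_normal(d)

# Write: a purely local Hebbian update stores the key-value pairing
# as an outer product of post- and presynaptic activity.
M += np.outer(value, key)

# Read: presenting the key retrieves the stored value.
recalled = M @ key
print(np.allclose(recalled, value))      # True for a single stored pair
```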
no code implementations • 19 Jun 2020 • Agnes Korcsak-Gorzo, Michael G. Müller, Andreas Baumbach, Luziwei Leng, Oliver Julien Breitwieser, Sacha J. van Albada, Walter Senn, Karlheinz Meier, Robert Legenstein, Mihai A. Petrovici
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately.
1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann
We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.
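A rough caricature of a synaptic-sampling rule of this kind (not SPORE itself): parameters follow Langevin dynamics whose drift combines a prior gradient with a reward gradient, so the network keeps exploring configurations while being biased toward rewarded ones. The prior, reward, and constants below are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

beta, temp, dt = 0.1, 0.5, 0.01
theta = 0.0                              # a single synaptic parameter

def dlog_prior(th, sigma=2.0):
    return -th / sigma**2                # Gaussian prior keeps weights small

def dreward(th):
    return -2.0 * (th - 2.0)             # toy reward, maximal at theta = 2

for _ in range(5000):
    drift = beta * (dlog_prior(theta) + dreward(theta))
    noise = np.sqrt(2.0 * beta * temp * dt) * rng.standard_normal()
    theta += drift * dt + noise          # reward-guided synaptic sampling

print(theta)   # fluctuates near ~1.8, where prior and reward balance
```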
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not accurately model how the brain learns.
no code implementations • 20 Mar 2019 • Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.
3 code implementations • 25 Jan 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass
This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and can be implemented by known biological mechanisms.
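One such algorithm from this line of work, e-prop, replaces the backward pass of BPTT with per-synapse eligibility traces that are combined online with a broadcast learning signal. The sketch below applies this forward-mode idea to a single leaky readout neuron, where it is exact; the spiking-network version in the paper additionally needs surrogate gradients and is only captured in spirit here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 100, 5
x = rng.standard_normal((T, n_in))       # presynaptic input trace
y_target = np.sin(np.linspace(0, 3, T))  # target readout signal

w = np.zeros(n_in)
alpha, lr = 0.9, 0.01                    # membrane leak, learning rate
v, eps = 0.0, np.zeros(n_in)
grad = np.zeros(n_in)

for t in range(T):
    v = alpha * v + w @ x[t]             # leaky membrane potential
    eps = alpha * eps + x[t]             # local eligibility trace per synapse
    L = v - y_target[t]                  # broadcast learning signal (error)
    grad += L * eps                      # forward-mode gradient accumulation
w -= lr * grad / T                       # one e-prop-style weight update
```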
1 code implementation • NeurIPS 2018 • Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain.
Ranked #22 on Speech Recognition on TIMIT
4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein
Neuromorphic hardware tends to pose limits on the connectivity of the deep networks that one can run on it.
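Deep rewiring addresses such constraints by keeping the number of active connections fixed throughout training: parameters of active connections follow noisy gradient updates, and whenever one crosses zero it is disconnected and a randomly chosen dormant connection is activated in its place. In the sketch below the gradient is a random stand-in, so only the rewiring bookkeeping is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 10                           # potential vs. active connections

theta = np.full(n, -1.0)                 # theta <= 0 means dormant
active = rng.choice(n, K, replace=False)
theta[active] = rng.uniform(0.01, 0.1, K)

def grad(i):
    """Random stand-in for the error gradient of connection i."""
    return rng.standard_normal()

for step in range(1000):
    for i in np.flatnonzero(theta > 0):  # update only active connections
        theta[i] -= 0.01 * grad(i) + 0.001 * rng.standard_normal()
        if theta[i] <= 0:                # crossed zero: disconnect ...
            theta[i] = -1.0
            j = rng.choice(np.flatnonzero(theta <= 0))
            theta[j] = 1e-3              # ... and rewire a dormant one
print(np.sum(theta > 0))                 # the connection count stays at K
```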
no code implementations • 13 Apr 2017 • David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass
These data are inconsistent with common models of network plasticity, and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and what functional purposes these processes might serve.
no code implementations • 17 Mar 2017 • Mihai A. Petrovici, Sebastian Schmitt, Johann Klähn, David Stöckel, Anna Schroeder, Guillaume Bellec, Johannes Bill, Oliver Breitwieser, Ilja Bytschok, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Sebastian Jeltsch, Vitali Karasenko, Mitja Kleider, Christoph Koke, Alexander Kononov, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Thomas Pfeil, Stefan Schiefer, Stefan Scholze, Anand Subramoney, Vasilis Thanasoulis, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, René Schüffny, Christian Mayr, Johannes Schemmel, Karlheinz Meier
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks.
1 code implementation • 6 Mar 2017 • Sebastian Schmitt, Johann Klähn, Guillaume Bellec, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Vitali Karasenko, Mitja Kleider, Christoph Koke, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Mihai A. Petrovici, Stefan Schiefer, Stefan Scholze, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, Christian Mayr, Johannes Schemmel, Karlheinz Meier
In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate.
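The gist of such in-the-loop training, sketched under strong simplifications: outputs are measured on an "analog substrate" with fixed gain mismatch and offsets, while the host computes updates from an idealized model; iterating against the measured outputs lets training absorb the fixed anomalies. The distortion model below is hypothetical, not the hardware of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, unknown-to-the-host distortions of the analog substrate:
# per-synapse gain mismatch and per-neuron offsets, drawn once.
gain = 1.0 + 0.2 * rng.standard_normal((4, 2))
offset = 0.05 * rng.standard_normal(4)

def hardware_forward(W, x):
    """Emulated analog forward pass: the chip applies distorted weights."""
    return np.maximum(0.0, (gain * W) @ x + offset)

W = 0.1 * rng.standard_normal((4, 2))
x = np.array([1.0, -0.5])
y_target = np.array([0.2, 0.0, 0.4, 0.1])

for step in range(500):
    y = hardware_forward(W, x)           # measure on the substrate
    err = y - y_target
    dW = np.outer(err * (y > 0), x)      # host-side idealized gradient
    W -= 0.05 * dW                       # write corrected weights back
```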
no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass
Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the best-known nonlinear optimization methods, simulated annealing.
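For reference, simulated annealing itself fits in a few lines: propose a local change to the configuration, always accept improvements, and accept deteriorations with a probability that shrinks as a temperature parameter is cooled. The toy energy function below stands in for the fitness of a network configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(c):
    """Toy cost of a network configuration (bit vector of connections)."""
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    return np.sum(c != target)

c = rng.integers(0, 2, 8)                # random initial configuration
T = 2.0
for step in range(2000):
    c_new = c.copy()
    c_new[rng.integers(8)] ^= 1          # propose a single rewiring
    dE = energy(c_new) - energy(c)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        c = c_new                        # downhill always, uphill sometimes
    T *= 0.999                           # cool down gradually
print(energy(c))                         # -> 0 with high probability
```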
no code implementations • NeurIPS 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
In this article, we reexamine the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.
1 code implementation • 20 Apr 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference.
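Under this view, a plastic synapse does not converge to a point estimate but samples from a posterior distribution over weights. A minimal sketch, assuming a conjugate Gaussian toy model so the exact posterior is known: Langevin dynamics with drift given by the gradient of the log posterior has exactly that posterior as its stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian prior over a synaptic weight w, Gaussian likelihood for the
# observed data: the exact posterior is Gaussian and known in closed form.
data = rng.normal(1.5, 1.0, 20)
prior_mu, prior_var, lik_var = 0.0, 4.0, 1.0

def dlog_post(w):
    return (prior_mu - w) / prior_var + np.sum(data - w) / lik_var

# Langevin sampling: drift up the log posterior plus matched noise.
dt, w, samples = 1e-3, 0.0, []
for step in range(50000):
    w += dt * dlog_post(w) + np.sqrt(2 * dt) * rng.standard_normal()
    samples.append(w)

post_var = 1.0 / (1.0 / prior_var + len(data) / lik_var)
post_mu = post_var * (prior_mu / prior_var + np.sum(data) / lik_var)
print(np.mean(samples[10000:]), post_mu)  # empirical vs. analytic mean
```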