no code implementations • 25 Oct 2023 • Wolfgang Maass
But current architectures and training methods for networks of spiking neurons in neuromorphic hardware (NMHW) are largely copied from artificial neural networks.
1 code implementation • 8 Jul 2021 • Philipp Plank, Arjun Rao, Andreas Wild, Wolfgang Maass
Spike-based neuromorphic hardware holds the promise of more energy-efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs.
no code implementations • 27 Jun 2021 • Wolfgang Maass, Veda C. Storey
We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects.
no code implementations • 12 May 2021 • Luke Y. Prince, Roy Henha Eyono, Ellen Boven, Arna Ghosh, Joe Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass, Blake A. Richards, Cristina Savin, Katharina Anna Wilmes
We provide a brief review of common assumptions about biological learning alongside findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in recurrent neural networks.
1 code implementation • 24 Jul 2020 • Thomas Bohnstingl, Stanisław Woźniak, Wolfgang Maass, Angeliki Pantazi, Evangelos Eleftheriou
For shallow networks, online spatio-temporal learning (OSTL) is gradient-equivalent to backpropagation through time (BPTT), enabling for the first time online training of spiking neural networks (SNNs) with BPTT-equivalent gradients.
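As a rough illustration of the underlying idea (a minimal NumPy sketch, not the authors' OSTL implementation), the example below trains a single leaky-integrator neuron online: an eligibility trace carries the temporal gradient forward in time, and for this shallow case the per-step updates match what BPTT would compute on the unrolled sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, lr = 100, 0.9, 0.02
x = rng.normal(size=T)            # input signal
w_true, w = 1.5, 0.0              # target and learned weight

# generate targets from a "teacher" neuron with the same dynamics
s = 0.0
y_target = np.empty(T)
for t in range(T):
    s = alpha * s + w_true * x[t]
    y_target[t] = s

for epoch in range(50):
    s, e = 0.0, 0.0               # membrane state and eligibility trace
    for t in range(T):
        s = alpha * s + w * x[t]  # forward dynamics
        e = alpha * e + x[t]      # e_t = d s_t / d w, computed forward in time
        w -= lr * (s - y_target[t]) * e   # online update, no unrolling in time

print(f"learned w = {w:.3f}  (target {w_true})")
```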
1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rudiger Dillmann
We use this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.
1 code implementation • 31 Jan 2020 • Christoph Stöckl, Wolfgang Maass
Spike-based neuromorphic hardware promises to reduce the energy consumption of image classification and other deep learning applications, particularly on mobile phones or other edge devices.
no code implementations • 30 Dec 2019 • Christoph Stöckl, Wolfgang Maass
We introduce a new conversion method where a gate in the ANN, which can in principle be of any type, is emulated by a small circuit of spiking neurons, with At Most One Spike (AMOS) per neuron.
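To make the flavor of such a conversion concrete, here is a toy illustration (my own construction, not the paper's AMOS circuits): an XOR gate emulated by three threshold "neurons", each of which emits at most one spike per input presentation.

```python
import itertools

def spike(drive, threshold):
    """A neuron that fires at most one spike: 1 if its drive crosses threshold."""
    return int(drive >= threshold)

def xor_amos(a, b):
    h_or  = spike(a + b, 1.0)         # fires on (a OR b)
    h_and = spike(a + b, 2.0)         # fires on (a AND b)
    return spike(h_or - h_and, 1.0)   # OR minus AND = XOR

for a, b in itertools.product([0, 1], repeat=2):
    print(a, b, "->", xor_amos(a, b))
```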
no code implementations • 16 Sep 2019 • Anand Subramoney, Franz Scherr, Wolfgang Maass
We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.
no code implementations • 20 Mar 2019 • Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.
no code implementations • 15 Mar 2019 • Thomas Bohnstingl, Franz Scherr, Christian Pehle, Karlheinz Meier, Wolfgang Maass
In contrast, the hyperparameters and learning algorithms of the networks of neurons in the brain, which neuromorphic hardware aims to emulate, have been optimized through extensive evolutionary and developmental processes for specific ranges of computing and learning tasks.
3 code implementations • 25 Jan 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass
This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and can be implemented by known biological mechanisms.
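The paper's e-prop algorithm factorizes the gradient into an online learning signal times an eligibility trace that each synapse computes locally and forward in time. The sketch below (a simplification with assumed parameter values, not the authors' released code) shows that factorization for the input synapses of a small network of leaky integrate-and-fire neurons with a surrogate spike derivative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_rec, T = 4, 3, 50
alpha, thr, lr = 0.9, 1.0, 0.01
w_in = rng.normal(scale=0.5, size=(n_rec, n_in))

def pseudo_derivative(v):
    # surrogate derivative of the non-differentiable spike function
    return np.maximum(0.0, 1.0 - np.abs(v - thr))

x = (rng.random((T, n_in)) < 0.3).astype(float)  # random input spike trains
y_target = rng.random((T, n_rec))                # arbitrary target rates

v = np.zeros(n_rec)
eps = np.zeros((n_rec, n_in))   # per-synapse traces, updated forward in time
grad = np.zeros_like(w_in)
for t in range(T):
    v = alpha * v + w_in @ x[t]            # membrane dynamics
    z = (v >= thr).astype(float)           # spikes
    v -= z * thr                           # soft reset
    eps = alpha * eps + x[t][None, :]      # filtered presynaptic activity
    e_t = pseudo_derivative(v)[:, None] * eps   # eligibility trace
    L_t = z - y_target[t]                  # online learning signal (error)
    grad += L_t[:, None] * e_t             # e-prop: sum over t of L_t * e_t
w_in -= lr * grad
print("gradient norm:", round(float(np.linalg.norm(grad)), 3))
```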
no code implementations • NeurIPS 2018 • Nima Anari, Constantinos Daskalakis, Wolfgang Maass, Christos H. Papadimitriou, Amin Saberi, Santosh Vempala
We give an application to recovering assemblies of neurons.
1 code implementation • NeurIPS 2018 • Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain.
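A key ingredient of the LSNN model studied here is a spiking neuron with an adaptive firing threshold. The following minimal sketch (illustrative; the constants are my own choices) shows the mechanism: each spike raises the threshold, which then decays back slowly, equipping the neuron with longer-lasting memory.

```python
import numpy as np

T, alpha, rho = 200, 0.95, 0.995   # membrane and adaptation decay
thr0, beta = 1.0, 1.7              # baseline threshold and adaptation strength
rng = np.random.default_rng(2)
I = 0.08 + 0.02 * rng.standard_normal(T)   # noisy constant input current

v, b, spikes = 0.0, 0.0, []
for t in range(T):
    thr = thr0 + beta * b          # adaptive threshold
    v = alpha * v + I[t]
    z = float(v >= thr)
    v -= z * thr                   # reset on spike
    b = rho * b + (1 - rho) * z    # spike-triggered threshold adaptation
    spikes.append(z)

print("spike count:", int(sum(spikes)), "of", T, "steps")
```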
4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein
Neuromorphic hardware tends to pose limits on the connectivity of the deep networks that one can run on it.
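The paper's DEEP R algorithm addresses this by training under a strict connectivity budget. Below is a bare-bones sketch of the rewiring step as described in the paper (simplified, with a placeholder gradient): each connection has a fixed sign, only a fixed number are active, and a connection whose parameter crosses zero is withdrawn and replaced by a randomly drawn dormant one.

```python
import numpy as np

rng = np.random.default_rng(3)
n_params, n_active, lr, noise = 100, 20, 0.05, 0.01
theta = np.abs(rng.normal(size=n_params))      # nonnegative connection parameters
sign = rng.choice([-1.0, 1.0], size=n_params)  # fixed sign per connection
active = np.zeros(n_params, dtype=bool)
active[rng.choice(n_params, n_active, replace=False)] = True

def grad(w):
    return w - 0.5   # placeholder gradient pulling weights toward 0.5

for step in range(200):
    w = np.where(active, sign * theta, 0.0)    # dormant connections contribute 0
    g = grad(w)
    # noisy SGD on the active parameters only
    theta[active] += -lr * sign[active] * g[active] \
                     + noise * rng.normal(size=n_active)
    for i in np.flatnonzero(active & (theta < 0)):   # connection "dies"
        active[i] = False
        j = rng.choice(np.flatnonzero(~active))      # draw a dormant replacement
        active[j], theta[j], sign[j] = True, 0.0, rng.choice([-1.0, 1.0])

print("active connections:", int(active.sum()))      # constant by construction
```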
no code implementations • 13 Apr 2017 • David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass
These data are inconsistent with common models for network plasticity and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and of what functional uses these ongoing processes might have.
no code implementations • 17 Mar 2017 • Mihai A. Petrovici, Sebastian Schmitt, Johann Klähn, David Stöckel, Anna Schroeder, Guillaume Bellec, Johannes Bill, Oliver Breitwieser, Ilja Bytschok, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Sebastian Jeltsch, Vitali Karasenko, Mitja Kleider, Christoph Koke, Alexander Kononov, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Thomas Pfeil, Stefan Schiefer, Stefan Scholze, Anand Subramoney, Vasilis Thanasoulis, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, René Schüffny, Christian Mayr, Johannes Schemmel, Karlheinz Meier
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks.
1 code implementation • 6 Mar 2017 • Sebastian Schmitt, Johann Klähn, Guillaume Bellec, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Vitali Karasenko, Mitja Kleider, Christoph Koke, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Mihai A. Petrovici, Stefan Schiefer, Stefan Scholze, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, Christian Mayr, Johannes Schemmel, Karlheinz Meier
In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate.
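Schematically, hardware-in-the-loop training looks as follows (an illustrative stand-in: the real system runs the forward pass on the BrainScaleS wafer, which is mimicked here by injected fixed-pattern distortions and trial noise). Because the forward pass uses the imperfect substrate, the learned weights absorb its anomalies.

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_out, lr = 8, 3, 0.1
W = rng.normal(scale=0.1, size=(n_out, n_in))
gain = 1 + 0.2 * rng.normal(size=(n_out, n_in))   # fixed-pattern "analog" mismatch

def hardware_forward(W, x):
    # stand-in for the analog substrate: distorted weights plus trial noise
    return (W * gain) @ x + 0.01 * rng.normal(size=n_out)

W_target = rng.normal(size=(n_out, n_in))          # desired input-output mapping
for step in range(2000):
    x = rng.normal(size=n_in)
    y = hardware_forward(W, x)                     # run on the "hardware"
    y_ref = W_target @ x                           # reference behaviour
    W -= lr * np.outer(y - y_ref, x)               # update computed on the host

print("residual error:", round(float(np.mean((W * gain - W_target) ** 2)), 4))
```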
no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass
Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the best-known nonlinear optimization methods: simulated annealing.
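For reference, the optimization method the network dynamics is argued to emulate is textbook simulated annealing; here is a bare-bones version (not the paper's neural implementation) on a toy energy landscape: uphill moves are accepted with a probability that shrinks as the temperature is lowered.

```python
import numpy as np

rng = np.random.default_rng(4)

def energy(x):                 # toy rugged objective with many local minima
    return float(np.sum(x ** 2) + np.sin(5 * x).sum())

x = rng.normal(size=5)
T = 1.0
for step in range(5000):
    proposal = x + 0.1 * rng.normal(size=5)
    dE = energy(proposal) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        x = proposal
    T = max(1e-3, T * 0.999)                       # cooling schedule

print("final energy:", round(energy(x), 3))
```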
no code implementations • NeurIPS 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.
1 code implementation • 20 Apr 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference.
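In this synaptic-sampling view, plasticity does not converge to a single weight vector but samples from a posterior distribution over network configurations. A minimal Langevin-dynamics sketch of that idea (my own toy posterior, not the paper's spiking network model):

```python
import numpy as np

rng = np.random.default_rng(5)
theta, lr, temp, steps = 0.0, 1e-3, 1.0, 20000

def grad_log_posterior(th):
    # toy bimodal posterior: mixture of two Gaussians at -2 and +2
    p1 = np.exp(-0.5 * (th + 2) ** 2)
    p2 = np.exp(-0.5 * (th - 2) ** 2)
    return (-(th + 2) * p1 - (th - 2) * p2) / (p1 + p2)

samples = np.empty(steps)
for i in range(steps):
    # Langevin step: noisy gradient ascent whose stationary
    # distribution is the posterior itself
    theta += lr * grad_log_posterior(theta) \
             + np.sqrt(2 * lr * temp) * rng.normal()
    samples[i] = theta

print("time spent near each mode:",
      float(np.mean(samples < 0).round(2)), float(np.mean(samples > 0).round(2)))
```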
no code implementations • 18 Dec 2014 • Zeno Jonke, Stefan Habenschuss, Wolfgang Maass
Furthermore, for the Traveling Salesman Problem one can demonstrate a surprising computational advantage of networks of spiking neurons over traditional artificial neural networks and Gibbs sampling.
no code implementations • NeurIPS 2009 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass
We show here that STDP, in conjunction with a stochastic soft winner-take-all (WTA) circuit, induces spiking neurons to generate through their synaptic weights implicit internal models for subclasses (or "causes") of the high-dimensional spike patterns of hundreds of pre-synaptic neurons.
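The theoretical result is that, at equilibrium, a winning neuron's weights converge to the log-probabilities of its input pattern. The following simplified simulation (assumed constants; the stochastic soft WTA is realized as a softmax draw) illustrates this on two artificial "causes":

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, eta = 20, 2, 0.05
W = rng.normal(scale=0.1, size=(n_out, n_in))

# two hidden "causes": binary templates over the input channels
templates = rng.random((n_out, n_in)) < 0.5

for step in range(5000):
    k = rng.integers(n_out)                                  # pick a cause
    x = (rng.random(n_in) < np.where(templates[k], 0.9, 0.1)).astype(float)
    u = W @ x
    p = np.exp(u - u.max()); p /= p.sum()                    # stochastic soft WTA
    winner = rng.choice(n_out, p=p)
    # STDP-like update, applied only to the winning neuron:
    # potentiation where a presynaptic spike occurred, depression otherwise
    W[winner] += eta * (x * np.exp(-W[winner]) - 1.0)

# at equilibrium w_i -> log p(x_i = 1 | cause), so exp(W) approximately
# recovers the template firing probabilities (0.9 / 0.1)
print(np.exp(W).round(1))
```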
no code implementations • NeurIPS 2009 • Steven Chase, Andrew Schwartz, Wolfgang Maass, Robert A. Legenstein
It was recently shown that tuning properties of neurons in monkey motor cortex are adapted selectively in order to compensate for an erroneous interpretation of their activity.
no code implementations • NeurIPS 2009 • Stefan Klampfl, Wolfgang Maass
Many models for computations in recurrent networks of neurons assume that the network state moves from some initial state to some fixed point attractor or limit cycle that represents the output of the computation.
no code implementations • NeurIPS 2008 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass
Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it.
no code implementations • NeurIPS 2007 • Lars Buesing, Wolfgang Maass
We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived.
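To make the optimization target concrete: the Information Bottleneck objective trades compression of the input X against preservation of information about the relevance variable Y. Below it is evaluated for a small discrete system (a generic illustration, not the paper's spiking derivation).

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information in bits for a 2-D joint distribution."""
    px = p_joint.sum(1, keepdims=True)
    py = p_joint.sum(0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (px @ py)[mask])).sum())

# joint distribution of input X and relevance variable Y
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
# stochastic encoder p(r | x): a noisy copy of X serves as representation R
p_r_given_x = np.array([[0.9, 0.1],
                        [0.1, 0.9]])

p_x = p_xy.sum(1)
p_xr = p_x[:, None] * p_r_given_x   # joint of X and R
p_ry = p_r_given_x.T @ p_xy         # joint of R and Y (Markov chain R - X - Y)

beta = 2.0
ib = mutual_information(p_xr) - beta * mutual_information(p_ry)  # to be minimized
print("I(X;R) =", round(mutual_information(p_xr), 3),
      " I(R;Y) =", round(mutual_information(p_ry), 3),
      " IB objective =", round(ib, 3))
```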