no code implementations • 1 May 2024 • Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, David Kappel, Anand Subramoney
Activity and parameter sparsity are two standard methods of making neural networks computationally more efficient.
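As a minimal illustration (not this paper's implementation), the sketch below contrasts the two kinds of sparsity in a single linear layer: parameter sparsity prunes weights, while activity sparsity skips computation for inactive inputs. The threshold and pruning fraction are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))          # dense weight matrix
x = rng.normal(size=128)                 # input activations

# Parameter sparsity: prune the smallest-magnitude weights (90% here, a
# hypothetical fraction), so most multiply-accumulates can be skipped.
threshold = np.quantile(np.abs(W), 0.9)
W_sparse = np.where(np.abs(W) >= threshold, W, 0.0)

# Activity sparsity: only a few units are active at any time, so only the
# columns of W belonging to active inputs need to be touched.
active = np.abs(x) > 1.0                 # hypothetical activity threshold
y_sparse = W[:, active] @ x[active]      # equals W @ x with inactive inputs set to 0

print("nonzero weights:", np.mean(W_sparse != 0))
print("active inputs:  ", np.mean(active))
print("same result as masked dense product:",
      np.allclose(W @ np.where(active, x, 0.0), y_sparse))
```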
1 code implementation • 29 Apr 2024 • Mark Schöne, Neeraj Mohan Sushma, Jingyue Zhuge, Christian Mayr, Anand Subramoney, David Kappel
While prior methods can process up to a few thousand time steps, our model, based on modern recurrent deep state-space models, scales to event streams of millions of events for both training and inference. We leverage their stable parameterization for learning long-range dependencies, their parallelizability along the sequence dimension, and their ability to integrate asynchronous events, to scale them to long event streams. We further augment these with novel event-centric techniques, enabling our model to match or beat state-of-the-art performance on several event-stream benchmarks (a minimal sketch of the event-integration idea follows this entry).
Ranked #1 on Audio Classification on SSC
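The following is a minimal, hypothetical sketch of the event-integration idea described in the abstract above: a diagonal, stably parameterized linear recurrence whose state decays over the irregular time gap between asynchronous events. All names and constants are illustrative and not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, feat_dim = 64, 16

# Diagonal, stably parameterized recurrence: decay rates are kept positive
# so the state contracts between events (illustrative parameterization).
log_rate = rng.normal(size=state_dim)          # learnable in a real model
B = rng.normal(size=(state_dim, feat_dim)) * 0.1

def step(state, dt, event_feat):
    """Advance the hidden state over a gap of dt seconds, then absorb one event."""
    decay = np.exp(-np.exp(log_rate) * dt)      # zero-order-hold style decay
    return decay * state + B @ event_feat

# An asynchronous event stream: (timestamp, feature vector) pairs.
events = [(t, rng.normal(size=feat_dim))
          for t in np.sort(rng.uniform(0, 1, size=1000))]

state, t_prev = np.zeros(state_dim), 0.0
for t, feat in events:
    state = step(state, t - t_prev, feat)
    t_prev = t
print("final state norm:", np.linalg.norm(state))
```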
no code implementations • 4 Feb 2024 • Bernhard Vogginger, Amirhossein Rostami, Vaibhav Jain, Sirine Arfa, Andreas Hantsch, David Kappel, Michael Schäfer, Ulrike Faltings, Hector A. Gonzalez, Chen Liu, Christian Mayr, Wolfgang Maaß
In this article, we analyze the underlying reasons for this and derive requirements and guidelines to promote neuromorphic systems for efficient and sustainable cloud computing: we first review currently available neuromorphic hardware systems and collect examples where neuromorphic solutions outperform conventional AI processing on CPUs and GPUs.
no code implementations • 14 Dec 2023 • Khaleelulla Khan Nazeer, Mark Schöne, Rishav Mukherji, Bernhard Vogginger, Christian Mayr, David Kappel, Anand Subramoney
In this work, we demonstrate the first-ever implementation of a language model on a neuromorphic device, specifically the SpiNNaker 2 chip, based on a recently published event-based architecture called the EGRU.
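The EGRU itself is described in a separate publication; the sketch below is only a loose, hypothetical illustration of an event-gated recurrent unit in that spirit: a GRU-like cell whose units communicate only when their internal state crosses a threshold, making activity sparse. The gating, threshold, and reset details are assumptions, not the published architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hid = 32, 64
# GRU-style parameters (update gate z, candidate state), randomly initialized.
Wz, Uz = rng.normal(size=(n_hid, n_in)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1
Wc, Uc = rng.normal(size=(n_hid, n_in)) * 0.1, rng.normal(size=(n_hid, n_hid)) * 0.1
theta = np.full(n_hid, 0.5)                  # hypothetical firing thresholds

def event_gated_step(c, y, x):
    """One step of a GRU-like cell with event-based (thresholded) outputs."""
    z = sigmoid(Wz @ x + Uz @ y)             # update gate, driven by past events y
    c_tilde = np.tanh(Wc @ x + Uc @ y)
    c = (1 - z) * c + z * c_tilde            # internal state evolves densely
    spikes = c > theta                       # events: only threshold crossings
    y = np.where(spikes, c, 0.0)             # sparse output vector
    c = np.where(spikes, c - theta, c)       # reset-by-subtraction after an event
    return c, y

c, y = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(100):
    c, y = event_gated_step(c, y, rng.normal(size=n_in))
print("fraction of active units at last step:", np.mean(y != 0))
```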
no code implementations • 24 May 2023 • David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney
In addition, back-propagation relies on the transpose of forward weight matrices to compute updates, introducing a weight transport problem across the network.
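To make the weight transport problem concrete, the toy backward pass below propagates the error with the transpose of the forward matrix W2; swapping W2.T for a fixed random matrix B2 (in the style of feedback alignment, shown purely for contrast and not as this paper's method) removes the need to transport forward weights to the backward path.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 32)) * 0.1, rng.normal(size=(10, 64)) * 0.1
B2 = rng.normal(size=(10, 64)) * 0.1         # fixed random feedback matrix

x, target = rng.normal(size=32), rng.normal(size=10)

# Forward pass through a tiny two-layer network.
h = np.tanh(W1 @ x)
y = W2 @ h
err = y - target                             # output error

# Standard backprop: the hidden-layer error uses W2.T (weight transport).
delta_h_bp = (W2.T @ err) * (1 - h**2)

# Feedback-alignment-style alternative: a fixed random matrix carries the error,
# so no weights need to be copied from the forward to the backward path.
delta_h_fa = (B2.T @ err) * (1 - h**2)

grad_W1_bp = np.outer(delta_h_bp, x)
grad_W1_fa = np.outer(delta_h_fa, x)
print("gradient shapes:", grad_W1_bp.shape, grad_W1_fa.shape)
```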
1 code implementation • 13 Jun 2022 • Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel
However, a gap remains between the efficiency and performance that RNNs can deliver and the requirements of real-world applications.
Ranked #3 on Gesture Recognition on DVS128 Gesture (using extra training data)
no code implementations • NeurIPS Workshop ICBINB 2021 • David Kappel, Francesco Negri, Christian Tetzlaff
This general formulation also allows us to use the model for online learning, where the network is given no knowledge of task-switching times.
no code implementations • 23 Mar 2021 • David Kappel, Christian Tetzlaff
The free energy principle (FEP) is a mathematical framework that describes how biological systems self-organize and survive in their environment.
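For reference, one standard form of the variational free energy that the FEP builds on, evaluated here for a toy discrete generative model; the prior, likelihood, and approximate posterior below are made up for illustration.

```python
import numpy as np

# Toy generative model over a binary latent z and an observation x = 1.
p_z = np.array([0.7, 0.3])                   # prior p(z)
p_x_given_z = np.array([0.9, 0.2])           # likelihood p(x=1 | z)
q_z = np.array([0.6, 0.4])                   # approximate posterior q(z)

# Variational free energy F = E_q[log q(z) - log p(x, z)]
#                           = KL(q(z) || p(z|x)) - log p(x)  >=  -log p(x).
log_joint = np.log(p_z) + np.log(p_x_given_z)
F = np.sum(q_z * (np.log(q_z) - log_joint))
neg_log_evidence = -np.log(np.sum(p_z * p_x_given_z))
print("free energy:", F, " negative log evidence:", neg_log_evidence)
```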
1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rudiger Dillmann
We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.
no code implementations • 27 Dec 2019 • Sebastian Billaudelle, Benjamin Cramer, Mihai A. Petrovici, Korbinian Schreiber, David Kappel, Johannes Schemmel, Karlheinz Meier
In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations.
no code implementations • 14 Nov 2019 • Lukas Hahne, Timo Lüddecke, Florentin Wörgötter, David Kappel
Our proposed hybrid model represents an alternative approach to learning abstract relations using self-attention and demonstrates that the Transformer network is also well suited for abstract visual reasoning.
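For context, the scaled dot-product self-attention operation at the core of the Transformer, in a minimal NumPy form; the token count and dimensions are arbitrary placeholders rather than the model's actual configuration.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 8, 32, 16            # e.g. 8 object/panel tokens

X = rng.normal(size=(seq_len, d_model))      # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) * 0.1 for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
attn = softmax(Q @ K.T / np.sqrt(d_k))       # pairwise relations between tokens
out = attn @ V                               # each token aggregates related tokens
print(out.shape)                             # (8, 16)
```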
no code implementations • 20 Mar 2019 • Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.
4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein
Neuromorphic hardware tends to pose limits on the connectivity of the deep networks that one can run on it.
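A simplified sketch of rewiring under a fixed connectivity budget, in the spirit of (but not identical to) the published algorithm: every potential connection holds a parameter, connections whose parameter drops below zero are pruned, and dormant connections are reactivated at random so the number of active synapses stays within the hardware budget. The learning rate, noise level, and budget are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, budget = 1000, 100                  # hypothetical connectivity budget

theta = np.full(n_params, -1.0)               # all connections start dormant...
theta[rng.choice(n_params, budget, replace=False)] = 0.1   # ...except `budget` of them
sign = rng.choice([-1.0, 1.0], size=n_params) # fixed sign per connection

def rewiring_step(theta, grad, lr=1e-2, noise=1e-3):
    active = theta >= 0
    # Noisy gradient step, applied only to active connections.
    theta = theta - lr * np.where(active, grad, 0.0) \
            + noise * rng.normal(size=theta.shape) * active
    # Connections whose parameter went negative are now dormant (pruned).
    n_active = np.sum(theta >= 0)
    # Reactivate randomly chosen dormant connections to refill the budget.
    dormant = np.flatnonzero(theta < 0)
    revive = rng.choice(dormant, size=max(budget - n_active, 0), replace=False)
    theta[revive] = 0.0
    return theta

for _ in range(50):
    theta = rewiring_step(theta, grad=rng.normal(size=n_params))

w = sign * np.maximum(theta, 0.0)             # effective weights; dormant ones are 0
print("active connections:", np.sum(theta >= 0), "of budget", budget)
```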
no code implementations • 13 Apr 2017 • David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass
These data are inconsistent with common models for network plasticity, and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and what functional uses these ongoing processes might have.
no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass
Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the most well-known nonlinear optimization methods, simulated annealing.
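For reference, the classic simulated annealing loop that the analysis refers to, here minimizing a toy quadratic energy over binary configurations; the energy function and cooling schedule are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                             # symmetric couplings (toy energy)

def energy(s):
    return -s @ J @ s                         # lower is better

s = rng.choice([-1.0, 1.0], size=n)           # initial configuration
T = 2.0                                       # initial temperature
for step in range(5000):
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] *= -1                            # propose flipping one unit
    dE = energy(s_new) - energy(s)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s = s_new
    T *= 0.999                                # slowly cool the system
print("final energy:", energy(s), "temperature:", T)
```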
no code implementations • NeurIPS 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.
1 code implementation • 20 Apr 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass
General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference.
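A minimal sketch of the plasticity-as-sampling view, assuming Langevin dynamics whose stationary distribution is the posterior over a synaptic parameter; the Gaussian prior and likelihood, and all constants, are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: infer a single synaptic parameter theta from noisy observations.
data = rng.normal(loc=1.5, scale=0.5, size=50)
prior_mean, prior_std, lik_std = 0.0, 1.0, 0.5

def grad_log_posterior(theta):
    """d/dtheta [log p(theta) + log p(data | theta)] for Gaussian prior and likelihood."""
    g_prior = -(theta - prior_mean) / prior_std**2
    g_lik = np.sum(data - theta) / lik_std**2
    return g_prior + g_lik

# Langevin dynamics: drift up the log-posterior plus diffusion noise, so the
# parameter samples from (rather than converges to) the posterior.
theta, beta, T, dt = 0.0, 1e-3, 1.0, 1.0
samples = []
for _ in range(20000):
    theta += beta * grad_log_posterior(theta) * dt \
             + np.sqrt(2 * beta * T * dt) * rng.normal()
    samples.append(theta)

print("posterior sample mean ~", np.mean(samples[5000:]))
```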