no code implementations • 14 Dec 2023 • Khaleelulla Khan Nazeer, Mark Schöne, Rishav Mukherji, Bernhard Vogginger, Christian Mayr, David Kappel, Anand Subramoney
In this work, we demonstrate the first-ever implementation of a language model on a neuromorphic device, specifically the SpiNNaker 2 chip, based on a recently published event-based architecture called the EGRU.
no code implementations • 13 Nov 2023 • Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney
Yet, sparse activations, while omnipresent in both biological neural networks and deep learning systems, have not been fully utilized as a compression technique in deep learning.
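As a rough illustration of the idea (not the authors' pipeline), a sparse activation vector can be stored as index-value pairs, shrinking memory and traffic roughly in proportion to the fraction of active units; the `compress`/`decompress` helpers below are hypothetical names for this sketch.

```python
import numpy as np

# Minimal sketch (not the authors' code): exploiting activation sparsity
# by storing only the nonzero entries of a layer's output.

def compress(activations: np.ndarray):
    """Store a sparse activation vector as (indices, values)."""
    idx = np.flatnonzero(activations)
    return idx.astype(np.int32), activations[idx]

def decompress(idx, vals, size):
    out = np.zeros(size, dtype=vals.dtype)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
h = np.maximum(rng.standard_normal(1024), 0)  # ReLU output, ~50% zeros
h[rng.random(1024) < 0.8] = 0                 # make it ~90% sparse
idx, vals = compress(h)
assert np.allclose(decompress(idx, vals, h.size), h)
print(f"dense: {h.nbytes} bytes, sparse: {idx.nbytes + vals.nbytes} bytes")
```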
1 code implementation • 9 Jun 2023 • Edoardo W. Grappolini, Anand Subramoney
We show that training only the delays in feed-forward spiking networks using backpropagation can achieve performance comparable to the more conventional weight training.
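A minimal sketch of what per-synapse delays look like in the forward pass, assuming delays rounded to integer time steps; the paper trains real-valued delays with backpropagation, which this toy `delayed_current` helper (a hypothetical name) does not attempt.

```python
import numpy as np

# Hedged sketch of per-synapse delays in a feed-forward spiking layer.
# Each synapse shifts its presynaptic spike train by its own delay
# before the weighted sum; only the delays would be learned.

def delayed_current(spikes, weights, delays):
    """spikes: (T, n_in) binary; weights, delays: (n_out, n_in)."""
    T, n_in = spikes.shape
    n_out = weights.shape[0]
    current = np.zeros((T, n_out))
    for j in range(n_out):
        for i in range(n_in):
            d = int(delays[j, i])
            if d < T:
                current[d:, j] += weights[j, i] * spikes[: T - d, i]
    return current

rng = np.random.default_rng(1)
spikes = (rng.random((100, 8)) < 0.1).astype(float)
W = np.full((4, 8), 0.5)              # fixed weights: only delays differ
D = rng.integers(0, 10, size=(4, 8))  # learnable in the paper, random here
I = delayed_current(spikes, W, D)
print(I.shape)  # (100, 4)
```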
no code implementations • 24 May 2023 • David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney
In addition, back-propagation relies on the transpose of forward weight matrices to compute updates, introducing a weight transport problem across the network.
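For context on the weight transport problem, the sketch below contrasts backprop's use of `W2.T` with feedback alignment (Lillicrap et al., 2016), a well-known workaround that replaces the transpose with a fixed random matrix; this is illustrative background, not the block-local method proposed in this paper.

```python
import numpy as np

# The weight transport problem: backprop propagates errors with W2.T,
# requiring feedback pathways to mirror forward weights. Feedback
# alignment sidesteps this by using a fixed random matrix B2 instead.

rng = np.random.default_rng(2)
W1 = rng.standard_normal((64, 32)) * 0.1
W2 = rng.standard_normal((32, 10)) * 0.1
B2 = rng.standard_normal((32, 10)) * 0.1  # fixed random feedback weights

x = rng.standard_normal((16, 64))
y = rng.standard_normal((16, 10))
for _ in range(100):
    h = np.tanh(x @ W1)
    out = h @ W2
    err = out - y
    dW2 = h.T @ err
    # backprop would use: dh = err @ W2.T (requires transporting W2)
    dh = err @ B2.T * (1 - h**2)          # feedback alignment instead
    dW1 = x.T @ dh
    W1 -= 1e-3 * dW1
    W2 -= 1e-3 * dW2
print(float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```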
no code implementations • 10 Mar 2023 • Anand Subramoney
Real-Time Recurrent Learning (RTRL) allows online learning, and its memory requirement is independent of sequence length.
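A minimal RTRL sketch for a vanilla RNN, tracking sensitivities of the hidden state to the recurrent weights only: the sensitivity matrix `S` has a fixed size of n × n², so memory does not grow with sequence length (the property noted above), though at substantial compute cost.

```python
import numpy as np

# Minimal RTRL sketch for h_t = tanh(W h_{t-1} + U x_t), tracking
# gradients w.r.t. W only. The sensitivity S has fixed size (n, n*n),
# independent of sequence length T (illustrative, not optimized).

rng = np.random.default_rng(3)
n, T = 8, 200
W = rng.standard_normal((n, n)) * 0.3
U = rng.standard_normal((n, n)) * 0.3
h = np.zeros(n)
S = np.zeros((n, n * n))          # sensitivity dh/dvec(W), fixed size
grad = np.zeros(n * n)

for t in range(T):
    x = rng.standard_normal(n)
    a = W @ h + U @ x
    h_new = np.tanh(a)
    D = 1 - h_new**2              # diagonal of tanh' at a
    # immediate effect: d a_k / d W_ij = delta(k, i) * h_prev[j]
    imm = np.kron(np.eye(n), h)   # shape (n, n*n)
    S = D[:, None] * (W @ S + imm)
    h = h_new
    # accumulate online gradient of L_t = 0.5 * ||h_t||^2 at every step
    grad += h @ S

print(grad.reshape(n, n)[:2, :2])
```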
1 code implementation • 13 Jun 2022 • Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel
However, a gap remains between the efficiency and performance that RNNs currently achieve and the requirements of real-world applications.
Ranked #2 on Gesture Recognition on DVS128 Gesture (using extra training data)
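A hedged sketch of the event-based mechanism behind EGRU, with simplified gating: units emit output only when their internal state crosses a threshold, so most activations, and the downstream computation they would trigger, are zero. Matrix names and constants below are illustrative, not the published parameterization.

```python
import numpy as np

# Hedged sketch of the event-based idea behind EGRU: GRU-like internal
# dynamics, but units communicate only on threshold crossings, making
# the output activity-sparse. Gating is simplified vs. the paper.

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(4)
n = 16
Wz = rng.standard_normal((n, 2 * n)) * 0.2
Wh = rng.standard_normal((n, 2 * n)) * 0.2
theta = 0.5 * np.ones(n)                    # event threshold

c, y = np.zeros(n), np.zeros(n)
for t in range(50):
    x = rng.standard_normal(n)
    xi = np.concatenate([x, y])             # input + last *event* output
    z = sigmoid(Wz @ xi)                    # update gate
    c = (1 - z) * c + z * np.tanh(Wh @ xi)  # candidate state update
    events = c > theta                      # threshold crossings
    y = np.where(events, c, 0.0)            # sparse, event-based output
    c = np.where(events, c - theta, c)      # soft reset after an event
print(f"active units at last step: {events.sum()}/{n}")
```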
no code implementations • 28 Feb 2022 • Alper Yegenoglu, Anand Subramoney, Thorsten Hater, Cristian Jimenez-Romero, Wouter Klijn, Aaron Perez Martin, Michiel van der Vlag, Michael Herty, Abigail Morrison, Sandra Diaz-Pier
Neuroscience models commonly have a high number of degrees of freedom and only specific regions within the parameter space are able to produce dynamics of interest.
1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann
We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.
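The general shape of a reward-modulated synaptic sampling update, in the spirit of SPORE but not its exact rule: Langevin dynamics combining a prior gradient, a reward-weighted eligibility trace, and exploration noise. The eligibility and reward signals below are stand-ins.

```python
import numpy as np

# Hedged sketch of reward-modulated synaptic sampling: parameters follow
# stochastic dynamics mixing a prior term, a reward-weighted eligibility
# trace, and noise. Only the shape of the update is shown; the actual
# rule and its derivation are in the paper.

rng = np.random.default_rng(5)
theta = rng.standard_normal(100) * 0.1      # synaptic parameters
beta, temp, dt = 0.01, 0.1, 1.0

for step in range(1000):
    elig = rng.standard_normal(100)         # stand-in eligibility trace
    reward = float(theta.mean() > 0)        # stand-in reward signal
    prior_grad = -theta                     # gradient of a Gaussian prior
    noise = np.sqrt(2 * beta * temp * dt) * rng.standard_normal(100)
    theta += beta * dt * (prior_grad + reward * elig) + noise
```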
no code implementations • 16 Sep 2019 • Anand Subramoney, Franz Scherr, Wolfgang Maass
We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly.
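For context, the conventional baseline the question starts from: a liquid/echo state setup where the recurrent weights are random and fixed and only a linear readout is trained, here via ridge regression.

```python
import numpy as np

# Baseline liquid state machine / echo state setup: a fixed random
# recurrent reservoir with only a linear readout trained. The paper asks
# whether choosing W purposefully, rather than randomly, helps.

rng = np.random.default_rng(6)
n, T = 100, 500
W = rng.standard_normal((n, n)) / np.sqrt(n) * 0.9  # random, fixed
w_in = rng.standard_normal(n)

u = np.sin(np.arange(T) * 0.1)              # input signal
target = np.roll(u, -5)                     # predict 5 steps ahead
X = np.zeros((T, n))
h = np.zeros(n)
for t in range(T):
    h = np.tanh(W @ h + w_in * u[t])
    X[t] = h

# ridge-regression readout (the only trained part)
lam = 1e-3
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ target)
print(f"train MSE: {np.mean((X @ w_out - target) ** 2):.4f}")
```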
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.
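A hedged sketch of the eligibility-trace idea (e-prop) from this line of work, shown for a simplified leaky non-spiking unit: each synapse maintains a local trace, and a learning signal at time t multiplies that trace, so no error needs to propagate backwards through time.

```python
import numpy as np

# Hedged e-prop-style sketch: a per-synapse eligibility trace combined
# with an instantaneous learning signal replaces BPTT. Shown for a
# leaky linear unit h_t = alpha*h_{t-1} + W x_t, not a spiking neuron.

rng = np.random.default_rng(7)
n_in, n_out, T, alpha = 10, 5, 100, 0.9
W = rng.standard_normal((n_out, n_in)) * 0.1
h = np.zeros(n_out)
elig = np.zeros((n_out, n_in))              # one local trace per synapse
dW = np.zeros_like(W)

for t in range(T):
    x = rng.standard_normal(n_in)
    h = alpha * h + W @ x
    elig = alpha * elig + x[None, :]        # local trace: dh_t/dW
    y_target = np.ones(n_out)
    L = h - y_target                        # learning signal (error)
    dW += L[:, None] * elig                 # trace x learning signal
W -= 1e-3 * dW
```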
1 code implementation • NeurIPS 2018 • Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass
Recurrent networks of spiking neurons (RSNNs) underlie the astounding computing and learning capabilities of the brain.
Ranked #22 on Speech Recognition on TIMIT
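A hedged sketch of the adaptive leaky integrate-and-fire (ALIF) neuron that gives these spiking networks their longer memory: each spike raises the neuron's firing threshold, which then decays back slowly, so recent activity suppresses future firing. Time constants below are illustrative, not the paper's.

```python
import numpy as np

# Hedged ALIF sketch: a leaky integrate-and-fire neuron whose threshold
# adapts upward after each spike and decays slowly, giving the unit a
# memory of its recent activity. Constants are illustrative.

rng = np.random.default_rng(8)
T, tau_m, tau_a = 300, 20.0, 50.0
beta = 0.5                                  # threshold adaptation strength
v, a = 0.0, 0.0                             # membrane potential, adaptation
v_th0 = 1.0
spikes = []

for t in range(T):
    I = 1.2 + 0.3 * rng.standard_normal()   # noisy input current
    v += (-v + I) / tau_m
    a += -a / tau_a
    if v > v_th0 + beta * a:                # adaptive threshold
        spikes.append(t)
        v = 0.0                             # reset membrane
        a += 1.0                            # raise threshold
print(f"{len(spikes)} spikes; intervals lengthen as the threshold adapts")
```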
no code implementations • 17 Mar 2017 • Mihai A. Petrovici, Sebastian Schmitt, Johann Klähn, David Stöckel, Anna Schroeder, Guillaume Bellec, Johannes Bill, Oliver Breitwieser, Ilja Bytschok, Andreas Grübl, Maurice Güttler, Andreas Hartel, Stephan Hartmann, Dan Husmann, Kai Husmann, Sebastian Jeltsch, Vitali Karasenko, Mitja Kleider, Christoph Koke, Alexander Kononov, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Thomas Pfeil, Stefan Schiefer, Stefan Scholze, Anand Subramoney, Vasilis Thanasoulis, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, René Schüffny, Christian Mayr, Johannes Schemmel, Karlheinz Meier
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks.