Search Results for author: Anand Subramoney

Found 12 papers, 4 with code

Language Modeling on a SpiNNaker 2 Neuromorphic Chip

no code implementations14 Dec 2023 Khaleelulla Khan Nazeer, Mark Schöne, Rishav Mukherji, Bernhard Vogginger, Christian Mayr, David Kappel, Anand Subramoney

In this work, we demonstrate the first-ever implementation of a language model on a neuromorphic device, specifically the SpiNNaker 2 chip, based on a recently published event-based architecture called the EGRU.

Gesture Recognition · Language Modelling
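The EGRU makes a GRU event-based: a unit communicates only when its internal state crosses a threshold, so most units stay silent at any given step. Below is a minimal NumPy sketch of that gating idea; the gate equations follow a standard GRU, while the threshold value and the subtractive reset are simplifying assumptions, not the published EGRU dynamics.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def egru_style_step(x, h, params, threshold=0.5):
    """GRU-like step with event-based output: only units whose state
    crosses the threshold emit; all other units stay silent."""
    Wz, Wr, Wc, Uz, Ur, Uc = params
    z = sigmoid(x @ Wz + h @ Uz)            # update gate
    r = sigmoid(x @ Wr + h @ Ur)            # reset gate
    c = np.tanh(x @ Wc + (r * h) @ Uc)      # candidate state
    h_new = (1.0 - z) * h + z * c           # standard GRU state update
    active = h_new > threshold              # event condition (assumption)
    y = np.where(active, h_new, 0.0)        # sparse, event-based output
    h_new = h_new - threshold * active      # subtractive reset (assumption)
    return y, h_new, active

rng = np.random.default_rng(0)
n_in, n_hid = 4, 64
Wz, Wr, Wc = (rng.standard_normal((n_in, n_hid)) * 0.5 for _ in range(3))
Uz, Ur, Uc = (rng.standard_normal((n_hid, n_hid)) * 0.2 for _ in range(3))
y, h, active = egru_style_step(rng.standard_normal(n_in), np.zeros(n_hid),
                               (Wz, Wr, Wc, Uz, Ur, Uc))
print(f"{active.mean():.0%} of units emitted an event this step")
```

On event-driven hardware such as SpiNNaker 2, it is exactly this sparse output that lets inactive units be skipped entirely.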

Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference

no code implementations13 Nov 2023 Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney

Yet, sparse activations, while omnipresent in both biological neural networks and deep learning systems, have not been fully utilized as a compression technique in deep learning.

Language Modelling
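The compression angle in the excerpt is that activity sparsity lets inference skip work, not just storage. A hedged NumPy sketch of that generic idea (not the paper's implementation): a matrix product that touches only the active units, so cost scales with how many units fire.

```python
import numpy as np

def active_only_matmul(y, W):
    """Propagate only the active (nonzero) units: cost scales with the
    number of firing units rather than the layer width.
    Weight sparsity composes with this: pruned rows of W shrink it further."""
    idx = np.flatnonzero(y)
    return y[idx] @ W[idx, :]

rng = np.random.default_rng(0)
y = rng.random(1024) * (rng.random(1024) < 0.05)   # ~5% of units active
W = rng.standard_normal((1024, 256))               # dense weights
assert np.allclose(active_only_matmul(y, W), y @ W)
```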

Beyond Weights: Deep learning in Spiking Neural Networks with pure synaptic-delay training

1 code implementation9 Jun 2023 Edoardo W. Grappolini, Anand Subramoney

We show that training ONLY the delays in feed-forward spiking networks using backpropagation can achieve performance comparable to the more conventional weight training.
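To make the delay-only training idea concrete, here is a hypothetical PyTorch toy: per-channel fractional delays applied by linear interpolation in time, so gradients reach the delays while the weights stay frozen. The paper trains delays in spiking networks; the interpolation scheme, the per-channel (rather than per-synapse) delays, and the toy loss below are all illustrative assumptions, not its method.

```python
import torch

def apply_learnable_delays(s, d):
    """Shift each input channel back in time by a fractional, learnable
    delay d[i] (in steps), using linear interpolation so that gradients
    flow into the delays themselves."""
    T, N = s.shape
    t = torch.arange(T, dtype=s.dtype).unsqueeze(1) - d   # delayed index, (T, N)
    t0 = t.floor().long().clamp(0, T - 1)
    t1 = (t0 + 1).clamp(0, T - 1)
    frac = (t - t0.to(s.dtype)).clamp(0.0, 1.0)
    cols = torch.arange(N).expand(T, N)
    return (1 - frac) * s[t0, cols] + frac * s[t1, cols]

torch.manual_seed(0)
T, N = 100, 8
spikes = (torch.rand(T, N) < 0.1).float()     # toy input spike trains
d = torch.zeros(N, requires_grad=True)        # delays: the only trainable params
w = torch.randn(N)                            # weights stay fixed
opt = torch.optim.Adam([d], lr=0.1)
for _ in range(200):
    out = apply_learnable_delays(spikes, d) @ w
    loss = ((out - 1.0) ** 2).mean()          # toy regression target (assumption)
    opt.zero_grad(); loss.backward(); opt.step()
```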

Block-local learning with probabilistic latent representations

no code implementations24 May 2023 David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney

In addition, back-propagation relies on the transpose of forward weight matrices to compute updates, introducing a weight transport problem across the network.
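The weight transport problem in that sentence: the backward pass needs the transpose W^T, i.e., each layer must "know" the forward weights downstream of it. One well-known workaround (feedback alignment, which is distinct from the block-local method this paper proposes) replaces W^T with a fixed random matrix; a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 32, 5
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1
B2 = rng.standard_normal((n_out, n_hid)) * 0.1   # fixed random feedback matrix

def train_step(x, y, lr=0.05):
    global W1, W2
    h = np.tanh(x @ W1)
    e = h @ W2 - y                     # output error
    # Exact backprop would use W2.T here; needing it is the
    # "weight transport problem". Feedback alignment substitutes
    # the fixed random matrix B2 and still learns.
    dh = (e @ B2) * (1.0 - h ** 2)
    W2 -= lr * np.outer(h, e)
    W1 -= lr * np.outer(x, dh)

teacher = rng.standard_normal((n_in, n_out)) * 0.5
for _ in range(1000):
    x = rng.standard_normal(n_in)
    train_step(x, x @ teacher)         # fit a random linear teacher
```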

Efficient Real Time Recurrent Learning through combined activity and parameter sparsity

no code implementations10 Mar 2023 Anand Subramoney

Real-Time Recurrent Learning (RTRL) allows online learning, and the growth of required memory is independent of sequence length.
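The memory claim is RTRL's defining trade-off: instead of storing past activations as BPTT does, it carries a fixed-size influence tensor dh/dW forward in time, costing O(n^3) memory regardless of sequence length. A minimal NumPy sketch for a vanilla tanh RNN follows; the paper's contribution is cutting this tensor down via activity and parameter sparsity, so the dense version below is only the baseline.

```python
import numpy as np

def rtrl_step(x, h, W, U, P):
    """One online RTRL step for h_t = tanh(W h_{t-1} + U x_t).
    P[k, i, j] = dh[k]/dW[i, j] is carried forward in time: memory is
    O(n^3) but constant in sequence length, unlike BPTT."""
    n = W.shape[0]
    h_new = np.tanh(W @ h + U @ x)
    D = 1.0 - h_new ** 2                             # tanh'
    P_new = D[:, None, None] * (
        np.einsum('km,mij->kij', W, P)               # carried influence
        + np.eye(n)[:, :, None] * h[None, None, :])  # direct effect of W
    return h_new, P_new

rng = np.random.default_rng(0)
n, d = 8, 3
W = rng.standard_normal((n, n)) * 0.2
U = rng.standard_normal((n, d)) * 0.2
h, P = np.zeros(n), np.zeros((n, n, n))
for x in rng.standard_normal((50, d)):               # arbitrarily long stream
    h, P = rtrl_step(x, h, W, U, P)
    e = h                                            # toy loss: drive state to 0
    W -= 0.01 * np.einsum('k,kij->ij', e, P)         # online update (P goes
                                                     # slightly stale, as usual)
```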

Efficient recurrent architectures through activity sparsity and sparse back-propagation through time

1 code implementation13 Jun 2022 Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel

However, a gap remains between the efficiency and performance that RNNs can deliver and the requirements of real-world applications.

Ranked #2 on Gesture Recognition on DVS128 Gesture (using extra training data)

Gesture Recognition · Language Modelling +2

Embodied Synaptic Plasticity with Online Reinforcement learning

1 code implementation3 Mar 2020 Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann

We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.

Reinforcement Learning (RL)
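Synaptic sampling treats each weight as a sample from a reward-tilted posterior: updates drift along a prior gradient plus a reward-scaled eligibility term, with injected noise keeping the weights exploring rather than converging. The Euler step below is only a schematic of that idea; the coefficients, the Gaussian prior, and the interface are assumptions, not SPORE's published rule.

```python
import numpy as np

def synaptic_sampling_step(theta, elig, reward, lr=1e-3, temp=0.1,
                           prior_var=1.0, rng=np.random.default_rng(0)):
    """One Euler step of a reward-modulated synaptic-sampling rule:
    drift toward the prior plus reward-scaled eligibility, with noise
    so the weights keep sampling instead of settling on a point."""
    drift = -theta / prior_var + reward * elig        # prior + reward term
    noise = rng.standard_normal(theta.shape)
    return theta + lr * drift + np.sqrt(2.0 * lr * temp) * noise
```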

Reservoirs learn to learn

no code implementations16 Sep 2019 Anand Subramoney, Franz Scherr, Wolfgang Maass

We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly.
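For context, the classical liquid state machine / echo state baseline fixes the recurrent weights at random and trains only a linear readout; the paper asks whether optimizing those recurrent weights, via learning-to-learn, does better. A NumPy sketch of the random-reservoir baseline (the paper's outer optimization loop is not shown):

```python
import numpy as np

def run_reservoir(xs, W_res, W_in, leak=0.3):
    """Drive a fixed random reservoir and collect its states."""
    h = np.zeros(W_res.shape[0])
    states = []
    for x in xs:
        h = (1 - leak) * h + leak * np.tanh(W_res @ h + W_in @ x)
        states.append(h.copy())
    return np.array(states)

rng = np.random.default_rng(1)
n, T = 100, 200
W_res = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # random, then frozen
W_in = rng.standard_normal((n, 1))
xs = np.sin(np.linspace(0, 8 * np.pi, T))[:, None]
states = run_reservoir(xs, W_res, W_in)
target = np.roll(xs, -1, axis=0)                   # toy task: predict next input
W_out, *_ = np.linalg.lstsq(states, target, rcond=None)   # train readout only
```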

Eligibility traces provide a data-inspired alternative to backpropagation through time

no code implementations NeurIPS Workshop Neuro_AI 2019 Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass

Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not accurately model how the brain learns.

Speech Recognition
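The eligibility-trace alternative (e-prop in this line of work) runs forward only: each synapse keeps a decaying trace of its local pre/post activity, and a per-neuron learning signal combines with that trace online, so no unrolled backward pass or stored activation history is needed. Below is a simplified, non-spiking sketch of the scheme; the trace form and the use of the raw error as the learning signal are simplifying assumptions.

```python
import numpy as np

def eprop_style_train(xs, targets, W, U, lr=0.01, kappa=0.9):
    """Simplified forward-only learning for h_t = tanh(W h_{t-1} + U x_t):
    each recurrent synapse keeps a decaying eligibility trace of local
    pre/post activity, combined online with an error signal, so no
    backward unroll through time is needed."""
    n = W.shape[0]
    h = np.zeros(n)
    trace = np.zeros_like(W)                         # one trace per synapse
    for x, y_star in zip(xs, targets):
        h_prev = h
        h = np.tanh(W @ h + U @ x)
        D = 1.0 - h ** 2                             # pseudo-derivative
        trace = kappa * trace + np.outer(D, h_prev)  # local, forward-only
        L = h - y_star                               # learning signal (simplified)
        W -= lr * L[:, None] * trace                 # trace x learning signal
    return W

rng = np.random.default_rng(0)
n, d, T = 16, 4, 100
W = rng.standard_normal((n, n)) * 0.1
U = rng.standard_normal((n, d)) * 0.1
W = eprop_style_train(rng.standard_normal((T, d)), np.zeros((T, n)), W, U)
```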
