Search Results for author: David Kappel

Found 14 papers, 4 papers with code

Language Modeling on a SpiNNaker 2 Neuromorphic Chip

no code implementations · 14 Dec 2023 · Khaleelulla Khan Nazeer, Mark Schöne, Rishav Mukherji, Bernhard Vogginger, Christian Mayr, David Kappel, Anand Subramoney

In this work, we demonstrate the first-ever implementation of a language model on a neuromorphic device, specifically the SpiNNaker 2 chip, based on a recently published event-based architecture called the EGRU.

Gesture Recognition, Language Modelling

Block-local learning with probabilistic latent representations

no code implementations · 24 May 2023 · David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney

In addition, back-propagation relies on the transpose of forward weight matrices to compute updates, introducing a weight transport problem across the network.
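To make the weight transport problem concrete: back-propagation computes hidden-layer errors through the transpose of the forward weight matrices, while biologically motivated alternatives such as feedback alignment (a different line of work from this paper's block-local approach) replace that transpose with a fixed random matrix. A minimal numpy sketch, with illustrative layer sizes and an arbitrary target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = relu(W1 @ x) -> y = W2 @ h
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(10, 64))
B = rng.normal(size=(64, 10))      # fixed random feedback matrix (no transport)

x = rng.normal(size=32)
h = np.maximum(W1 @ x, 0.0)
y = W2 @ h
e = y - np.ones(10)                # error w.r.t. an arbitrary target

# Back-propagation: the hidden error uses W2.T, i.e. the backward pathway
# must "transport" the forward weights -- the problem named in the abstract.
delta_bp = (W2.T @ e) * (h > 0)

# Feedback alignment: replace W2.T with the fixed random matrix B.
delta_fa = (B @ e) * (h > 0)

# Either delta drives a weight update of the same outer-product form:
dW1_bp = np.outer(delta_bp, x)
dW1_fa = np.outer(delta_fa, x)
```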

Efficient recurrent architectures through activity sparsity and sparse back-propagation through time

1 code implementation · 13 Jun 2022 · Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel

However, a gap remains between what RNNs can deliver in terms of efficiency and performance and the requirements of real-world applications.

Ranked #2 on Gesture Recognition on DVS128 Gesture (using extra training data)

Gesture Recognition, Language Modelling, +2
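The EGRU underlying both this paper and the SpiNNaker 2 entry above couples a GRU-like cell to an event mechanism: a unit communicates only when its internal state crosses a threshold, so the forward pass, and back-propagation through time, only touch the active units. A heavily simplified numpy sketch of that event mechanism (the paper's actual gating, threshold and reset details differ; this only illustrates the activity sparsity):

```python
import numpy as np

def egru_like_step(x, h, c, params, theta=0.5):
    """One step of a thresholded recurrent unit (simplified EGRU-style cell).

    Only units whose internal state c crosses the threshold theta emit an
    output event in h; all other outputs are exactly zero, so downstream
    matrix multiplies can skip them.
    """
    Wx, Wh = params
    c = c + np.tanh(Wx @ x + Wh @ h)    # integrate input into internal state
    events = c > theta                   # which units fire this step
    h = np.where(events, c, 0.0)         # sparse output: events only
    c = np.where(events, c - theta, c)   # subtractive reset after an event
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 16, 32
params = (rng.normal(scale=0.5, size=(n_hid, n_in)),
          rng.normal(scale=0.5, size=(n_hid, n_hid)))
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(10):
    h, c = egru_like_step(rng.normal(size=n_in), h, c, params)
```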

Continual Learning with Memory Cascades

no code implementations · NeurIPS Workshop ICBINB 2021 · David Kappel, Francesco Negri, Christian Tetzlaff

This general formulation also allows us to use the model for online learning, where the network receives no knowledge of task switching times.

Continual Learning, Permuted-MNIST
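The memory cascades of the title can be pictured as a chain of hidden variables behind each synapse, in the spirit of Benna and Fusi's cascade synapse: neighbouring levels equilibrate at geometrically decreasing rates, so recent changes live in the fast variables while consolidated memories sink into the slow ones. A small numpy sketch of such a chain, with illustrative depth and coupling constants:

```python
import numpy as np

def cascade_step(u, dw, g0=0.5, ratio=0.5):
    """One update of a cascade-style synapse.

    u[0] is the visible synaptic weight; u[1:] are hidden variables.
    Neighbouring levels exchange with a coupling that shrinks
    geometrically along the chain, so deeper levels change ever slower.
    """
    k = len(u)
    g = g0 * ratio ** np.arange(k - 1)    # couplings g_1 > g_2 > ...
    new = u.copy()
    new[0] += dw                           # plasticity acts on the fast variable
    for i in range(k - 1):
        flow = g[i] * (u[i] - u[i + 1])    # diffusion between levels i and i+1
        new[i] -= flow
        new[i + 1] += flow
    return new

u = np.zeros(5)
for t in range(100):
    u = cascade_step(u, dw=1.0 if t == 0 else 0.0)
# After the single pulse, the fast variable decays while the slow levels
# retain a long-lived trace of the change.
```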

A synapse-centric account of the free energy principle

no code implementations · 23 Mar 2021 · David Kappel, Christian Tetzlaff

The free energy principle (FEP) is a mathematical framework that describes how biological systems self-organize and survive in their environment.
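For context, the free energy in question is the variational free energy familiar from approximate Bayesian inference: with observations o, hidden states s, a generative model p(s, o), and a recognition density q(s),

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
     \;=\; D_{\mathrm{KL}}\!\left[\,q(s)\,\big\|\,p(s \mid o)\,\right] \;-\; \ln p(o)
     \;\ge\; -\ln p(o).
```

Because the KL term is non-negative, F upper-bounds surprise (-ln p(o)); minimizing it both improves the internal model and keeps the system in unsurprising states. The paper's contribution, per its title, is to recast this principle at the level of single synapses.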

Embodied Synaptic Plasticity with Online Reinforcement learning

1 code implementation · 3 Mar 2020 · Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann

We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.

Reinforcement Learning (RL)
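SPORE's synaptic sampling can be sketched as reward-modulated Langevin dynamics: each parameter follows the gradient of a log prior plus a reward-weighted eligibility term, with additive exploration noise, so the network samples from a posterior tilted toward high reward. A schematic discretized update (the prior, eligibility traces, and constants here are illustrative assumptions, not the exact SPORE rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def synaptic_sampling_step(theta, elig, reward, beta=1e-3, T=0.1, sigma=1.0):
    """One Euler step of reward-modulated Langevin dynamics over parameters:

    d theta = beta * (grad log prior + reward * eligibility) dt
              + sqrt(2 * beta * T) dW
    """
    grad_log_prior = -theta / sigma**2                 # Gaussian prior on theta
    drift = beta * (grad_log_prior + reward * elig)    # reward-weighted drift
    noise = np.sqrt(2.0 * beta * T) * rng.normal(size=theta.shape)
    return theta + drift + noise

theta = rng.normal(size=100)                 # synaptic parameters
for _ in range(1000):
    elig = rng.normal(size=theta.shape)      # stand-in eligibility traces
    reward = 1.0                             # stand-in scalar reward signal
    theta = synaptic_sampling_step(theta, elig, reward)
```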

Structural plasticity on an accelerated analog neuromorphic hardware system

no code implementations · 27 Dec 2019 · Sebastian Billaudelle, Benjamin Cramer, Mihai A. Petrovici, Korbinian Schreiber, David Kappel, Johannes Schemmel, Karlheinz Meier

In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations.

Computational Efficiency

Attention on Abstract Visual Reasoning

no code implementations · 14 Nov 2019 · Lukas Hahne, Timo Lüddecke, Florentin Wörgötter, David Kappel

Our proposed hybrid model represents an alternative approach to learning abstract relations using self-attention, and demonstrates that the Transformer network is also well suited for abstract visual reasoning.

Program Induction, Relation, +3
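Self-attention, the operation the hybrid model relies on, is the scaled dot-product form from the original Transformer paper. A minimal single-head numpy version, with illustrative dimensions and no masking:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: softmax(QK^T/sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # attention-weighted mix

rng = np.random.default_rng(0)
n_tokens, d_model = 8, 16
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)    # shape (8, 16)
```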

Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype

no code implementations · 20 Mar 2019 · Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.

Deep Rewiring: Training very sparse deep networks

4 code implementations · ICLR 2018 · Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein

Neuromorphic hardware tends to limit the connectivity of the deep networks that one can run on it.
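The Deep Rewiring (DEEP R) algorithm behind this paper keeps a fixed connection budget: active connections take noisy gradient steps under a sign constraint, a connection whose parameter crosses zero is pruned, and a randomly chosen dormant connection is grown in its place. A compact numpy sketch (the loss gradient is a stand-in; step size, noise and prior constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_r_step(theta, active, grad, eta=0.01, alpha=1e-4, T=1e-4):
    """One Deep Rewiring update on a flat parameter vector.

    Active connections do a noisy gradient step with an L1-like prior term;
    connections whose parameter drops below zero are pruned, and the same
    number of dormant connections is regrown at zero, keeping sparsity fixed.
    """
    noise = np.sqrt(2 * eta * T) * rng.normal(size=theta.shape)
    theta[active] += -eta * grad[active] - eta * alpha + noise[active]
    died = active & (theta < 0)          # crossed the sign constraint: prune
    active &= theta >= 0
    dormant = np.flatnonzero(~active)
    regrow = rng.choice(dormant, size=died.sum(), replace=False)
    active[regrow] = True                # regrown connections restart at zero
    theta[regrow] = 0.0
    return theta, active

n = 1000
theta = np.abs(rng.normal(size=n))
active = rng.random(n) < 0.1             # 10% connectivity budget
for _ in range(100):
    grad = rng.normal(size=n)            # stand-in for a loss gradient
    theta, active = deep_r_step(theta, active, grad)
```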

A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning

no code implementations · 13 Apr 2017 · David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass

These data are inconsistent with common models of network plasticity and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and of what functional uses these processes might have.

CaMKII activation supports reward-based neural network optimization through Hamiltonian sampling

no code implementations · 1 Jun 2016 · Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass

Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the most well-known nonlinear optimization methods, simulated annealing.
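Simulated annealing, the method the analysis connects to stochastic synaptic search, accepts uphill moves with a probability that decays as a temperature parameter is lowered. A textbook numpy sketch on a toy energy landscape:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing(energy, x0, steps=5000, T0=1.0, cooling=0.999):
    """Metropolis-style simulated annealing with geometric cooling."""
    x, T = x0, T0
    E = energy(x)
    for _ in range(steps):
        x_new = x + 0.1 * rng.normal(size=x.shape)    # local random proposal
        E_new = energy(x_new)
        # Accept downhill moves always, uphill with Boltzmann probability.
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            x, E = x_new, E_new
        T *= cooling                                   # lower the temperature
    return x, E

# Toy multimodal energy function; annealing helps escape local minima.
energy = lambda x: float(np.sum(x**2) + 2.0 * np.sin(5 * x).sum())
x_best, E_best = simulated_annealing(energy, x0=rng.normal(size=3))
```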

Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring

no code implementations · NeurIPS 2015 · David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

We reexamine in this article the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.

Network Plasticity as Bayesian Inference

1 code implementation · 20 Apr 2015 · David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference.

Bayesian Inference, Learning Theory
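The mathematical picture behind this entry, and the synaptic sampling work above, is that plasticity implements Langevin sampling: if the parameters θ diffuse along the gradient of the log posterior with matched noise, the stationary distribution of the network configuration is the (tempered) posterior itself. Schematically, with notation simplified,

```latex
d\theta_i \;=\; \beta\,\frac{\partial}{\partial \theta_i}
  \log p^{*}\!\left(\boldsymbol{\theta} \mid \mathcal{D}\right) dt
  \;+\; \sqrt{2\beta T}\; d\mathcal{W}_i,
\qquad
p_{\infty}(\boldsymbol{\theta}) \;\propto\;
  p^{*}\!\left(\boldsymbol{\theta} \mid \mathcal{D}\right)^{1/T},
```

where D is the data and W_i are independent Wiener processes. For T = 1 the synapses sample from the Bayesian posterior rather than converging to a point estimate, making rewiring and ongoing parameter drift a feature of inference rather than noise to be suppressed.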
