Search Results for author: Robert Legenstein

Found 18 papers, 11 papers with code

Adversarially Robust Spiking Neural Networks Through Conversion

1 code implementation • 15 Nov 2023 • Ozan Özdenizci, Robert Legenstein

Spiking neural networks (SNNs) provide an energy-efficient alternative to artificial neural networks (ANNs) for a variety of AI applications.

Adversarial Robustness
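
For context, ANN-to-SNN conversion in its simplest form copies trained ReLU weights into integrate-and-fire neurons and relies on firing rates to approximate activations. The sketch below illustrates that generic rate-coding idea on a toy single layer; it is not the robustness-preserving conversion procedure of the paper, and all names in it are illustrative.

```python
import numpy as np

def if_layer_rates(x_rates, W, T=200, v_th=1.0, seed=0):
    """Single integrate-and-fire layer driven by Bernoulli input spikes.

    x_rates: input firing probabilities per timestep (shape [n_in]).
    W:       weights copied unchanged from a trained ReLU layer.
    Returns output firing rates, which approximate relu(x_rates @ W)
    for rates in [0, 1] (rate coding with reset-by-subtraction).
    """
    rng = np.random.default_rng(seed)
    v = np.zeros(W.shape[1])        # membrane potentials
    counts = np.zeros(W.shape[1])   # output spike counters
    for _ in range(T):
        s_in = (rng.random(x_rates.shape) < x_rates).astype(float)
        v += s_in @ W               # integrate weighted input spikes
        spikes = v >= v_th
        counts += spikes
        v[spikes] -= v_th           # reset by subtraction
    return counts / T

x = np.array([0.2, 0.8, 0.5])
W = np.array([[0.5, -0.3], [0.4, 0.6], [-0.2, 0.3]])
print(if_layer_rates(x, W))          # spike rates track the ANN activation
print(np.maximum(x @ W, 0.0))        # the ReLU values they approximate
```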

Restoring Vision in Adverse Weather Conditions with Patch-Based Denoising Diffusion Models

1 code implementation • 29 Jul 2022 • Ozan Özdenizci, Robert Legenstein

Image restoration under adverse weather conditions has been of significant interest for various computer vision applications.

Denoising • Image Restoration +1
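
The patch-based trick behind restoration models of this kind is to run the diffusion denoiser on overlapping patches and average the per-pixel noise estimates, so the stitched result shows no seams at patch borders. Below is a minimal sketch of that averaging step only, with a stand-in `eps_model` callable in place of a trained denoiser (names and sizes are illustrative):

```python
import numpy as np

def patched_eps_estimate(x, eps_model, patch=64, stride=32):
    """Estimate per-pixel noise for a large image by averaging the
    denoiser's outputs over overlapping patches.

    eps_model: callable mapping a (patch, patch, C) array to a noise
               estimate of the same shape (stand-in for the trained model).
    """
    H, W, C = x.shape
    eps = np.zeros_like(x)
    weight = np.zeros((H, W, 1))
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            eps[i:i+patch, j:j+patch] += eps_model(x[i:i+patch, j:j+patch])
            weight[i:i+patch, j:j+patch] += 1.0
    return eps / np.maximum(weight, 1.0)

# Inside a reverse-diffusion loop one would call this at every step
# instead of running the model on the full image at once.
dummy = lambda p: 0.1 * p   # toy denoiser for illustration only
x = np.random.default_rng(0).normal(size=(128, 128, 3))
print(patched_eps_estimate(x, dummy).shape)
```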

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

1 code implementation • 23 May 2022 • Thomas Limbacher, Ozan Özdenizci, Robert Legenstein

Memory is a key component of biological neural systems that enables the retention of information over temporal scales ranging from hundreds of milliseconds up to years.

One-Shot Learning • Out-of-Distribution Generalization +1
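
The core mechanism named in the title, Hebbian plasticity, can be illustrated as a one-shot associative memory: an outer-product weight update stores a key-value pair, and a matrix-vector product recalls it. A generic numpy sketch of that mechanism (not the paper's network architecture):

```python
import numpy as np

def hebbian_write(M, key, value, eta=1.0):
    """Outer-product Hebbian update: strengthen connections between
    co-active key and value units (one-shot association)."""
    return M + eta * np.outer(value, key)

def hebbian_read(M, key):
    """Recall the value associated with a stored key."""
    return M @ key

rng = np.random.default_rng(0)
d = 64
k = rng.normal(size=d); k /= np.linalg.norm(k)   # unit-norm key
v = rng.normal(size=d)
M = hebbian_write(np.zeros((d, d)), k, v)
print(np.corrcoef(hebbian_read(M, k), v)[0, 1])  # ~1.0: value recovered
```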

Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching

1 code implementation • CVPR 2022 • Ozan Özdenizci, Robert Legenstein

Experimental benchmark evaluations show that output code matching is superior to existing regularized weight-quantization-based defenses and is an effective defense against stealthy weight bit-flip attacks.

Quantization
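
The generic form of output code matching: each class is assigned a long codeword, the network is trained to emit that codeword, and predictions are decoded to the nearest one, so a bounded number of corrupted output bits still decodes to the correct class. A toy illustration with a random codebook (illustrative only, not the paper's training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, code_len = 10, 64
# Random +/-1 codewords: with high probability pairwise distances are
# large, which is what gives the code its error-correction slack.
codebook = rng.choice([-1.0, 1.0], size=(n_classes, code_len))

def predict(output):
    """Decode a code-length network output to the nearest codeword."""
    return int(np.argmax(output @ codebook.T))

clean = codebook[3].copy()
corrupted = clean.copy()
corrupted[:20] *= -1            # even with 20 flipped bits...
print(predict(clean), predict(corrupted))  # ...this usually decodes to 3
```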

Embodied Synaptic Plasticity with Online Reinforcement Learning

1 code implementation • 3 Mar 2020 • Jacques Kaiser, Michael Hoff, Andreas Konle, J. Camilo Vasquez Tieck, David Kappel, Daniel Reichard, Anand Subramoney, Robert Legenstein, Arne Roennau, Wolfgang Maass, Rüdiger Dillmann

We use this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following.

Reinforcement Learning (RL)
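
Synaptic sampling rules of this family combine a drift toward higher reward and toward a prior with exploration noise, i.e., Langevin-style dynamics over synaptic parameters. A generic single-step sketch of that idea, not SPORE's exact update rule:

```python
import numpy as np

def spore_like_step(theta, reward_grad, lr=1e-3, temperature=0.1,
                    prior_scale=1.0, rng=None):
    """One reward-modulated synaptic-sampling update (Langevin-style):
    drift toward higher reward and toward a Gaussian prior, plus
    exploration noise. A generic sketch, not SPORE's published rule."""
    rng = rng or np.random.default_rng()
    prior_grad = -theta / prior_scale**2     # grad of log Gaussian prior
    noise = rng.normal(size=theta.shape)
    drift = lr * (reward_grad + prior_grad)
    return theta + drift + np.sqrt(2 * lr * temperature) * noise
```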

Eligibility traces provide a data-inspired alternative to backpropagation through time

no code implementations • NeurIPS Workshop Neuro_AI 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass

Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns.

Speech Recognition
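
An eligibility-trace rule replaces the backward pass of BPTT with two forward-running quantities: a per-synapse trace that low-pass filters local pre/post activity, and a per-neuron learning signal that gates the weight change. A minimal e-prop-style sketch (names and shapes are illustrative, not the paper's notation):

```python
import numpy as np

def eprop_like_update(W, pre, post_pseudo, learn_signal, e_trace,
                      decay=0.9, lr=1e-2):
    """One online step of an eligibility-trace rule: the trace keeps a
    decaying record of local pre/post coincidences, and a per-neuron
    learning signal gates the update in place of backpropagated errors."""
    e_trace = decay * e_trace + np.outer(pre, post_pseudo)  # local history
    W = W + lr * e_trace * learn_signal[None, :]            # gated update
    return W, e_trace

rng = np.random.default_rng(0)
W, e = rng.normal(size=(5, 3)), np.zeros((5, 3))
W, e = eprop_like_update(W, rng.random(5), rng.random(3),
                         rng.normal(size=3), e)
```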

Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype

no code implementations • 20 Mar 2019 • Yexin Yan, David Kappel, Felix Neumaerker, Johannes Partzsch, Bernhard Vogginger, Sebastian Hoeppner, Steve Furber, Wolfgang Maass, Robert Legenstein, Christian Mayr

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources.

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

3 code implementations • 25 Jan 2019 • Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass

This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and can be implemented by known biological mechanisms.

Deep Rewiring: Training very sparse deep networks

4 code implementations • ICLR 2018 • Guillaume Bellec, David Kappel, Wolfgang Maass, Robert Legenstein

Neuromorphic hardware tends to pose limits on the connectivity of the deep networks that one can run on it.
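
In Deep Rewiring, each potential connection has a fixed sign and a nonnegative amplitude; active amplitudes follow noisy SGD with an L1 prior, connections whose amplitude crosses zero are pruned, and an equal number of dormant connections are activated, so sparsity stays exactly constant. A compact sketch of one such step on a flat parameter vector (a simplified reading of the ICLR 2018 algorithm, not a verified reimplementation):

```python
import numpy as np

def deep_r_step(w, sign, active, grad, lr=1e-2, alpha=1e-4, T=1e-4, rng=None):
    """One Deep Rewiring step.

    w:      nonnegative amplitudes; the effective weight is sign * w
    sign:   fixed +/-1 per connection
    active: boolean mask of currently realized connections
    grad:   loss gradient w.r.t. the effective weights
    """
    rng = rng or np.random.default_rng()
    noise = np.sqrt(2 * lr * T) * rng.normal(size=w.shape)
    # Noisy SGD with an L1 prior (alpha) on the active amplitudes only.
    w = np.where(active, w - lr * (grad * sign + alpha) + noise, w)
    died = active & (w < 0)          # amplitude crossed zero: prune
    active = active & ~died
    if died.any():
        # Activate as many dormant connections as were pruned, keeping
        # the number of active connections (the sparsity level) fixed.
        reborn = rng.choice(np.flatnonzero(~active),
                            size=int(died.sum()), replace=False)
        active[reborn] = True
        w[reborn] = 0.0              # rewired connections restart at zero
    return w, active
```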

A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning

no code implementations • 13 Apr 2017 • David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh, Wolfgang Maass

These data are inconsistent with common models of network plasticity and raise the questions of how neural circuits can maintain a stable computational function in spite of these continuously ongoing processes, and what functional uses these processes might have.

CaMKII activation supports reward-based neural network optimization through Hamiltonian sampling

no code implementations • 1 Jun 2016 • Zhaofei Yu, David Kappel, Robert Legenstein, Sen Song, Feng Chen, Wolfgang Maass

Our theoretical analysis shows that stochastic search could in principle even attain optimal network configurations by emulating one of the best-known nonlinear optimization methods, simulated annealing.
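
For reference, simulated annealing as invoked here accepts uphill moves with probability exp(-ΔE/T) under a decaying temperature T, which lets a stochastic search escape local minima early and settle into a near-optimal state late. A minimal generic implementation (unrelated to the paper's neural emulation):

```python
import numpy as np

def anneal(energy, x0, steps=5000, T0=1.0, cooling=0.999,
           step_size=0.1, seed=0):
    """Minimal simulated annealing on a scalar state: accept uphill
    moves with probability exp(-dE / T) while T decays geometrically."""
    rng = np.random.default_rng(seed)
    x, T = float(x0), T0
    e = energy(x)
    for _ in range(steps):
        cand = x + step_size * rng.normal()
        de = energy(cand) - e
        if de < 0 or rng.random() < np.exp(-de / T):
            x, e = cand, energy(cand)
        T *= cooling
    return x, e

# Double-well energy: the search escapes the shallow minimum near x = +1
# and settles near the global minimum around x = -1.
print(anneal(lambda x: (x**2 - 1)**2 + 0.3 * x, x0=1.0))
```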

Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring

no code implementations • NeurIPS 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

In this article, we reexamine the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks.

Network Plasticity as Bayesian Inference

1 code implementation • 20 Apr 2015 • David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

General results from statistical learning theory suggest understanding not only brain computations but also brain plasticity as probabilistic inference.

Bayesian Inference • Learning Theory
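
The formal core of this "plasticity as inference" view is standard stochastic (Langevin) sampling: parameters drift up the log posterior while diffusion noise keeps them sampling from the distribution rather than converging to a point estimate. In generic notation (my symbols, not necessarily the paper's):

```latex
d\theta_i \;=\; \beta \,\frac{\partial}{\partial \theta_i} \log p^{*}(\theta)\, dt
\;+\; \sqrt{2\beta T}\; d\mathcal{W}_i
```

Here p*(θ) is the target (posterior) distribution, β a learning rate, T a temperature, and W_i independent Wiener processes; the stationary distribution of these dynamics is proportional to p*(θ)^{1/T}, so for T = 1 the parameters sample from the posterior itself.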
