Search Results for author: Rainer Engelken

Found 4 papers, 2 papers with code

SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks

1 code implementation · NeurIPS 2023 · Rainer Engelken

In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
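The listing gives only a one-sentence abstract, so the sketch below illustrates the general idea of event-based simulation (processing spikes from a priority queue and evolving membrane potentials analytically between events) rather than SparseProp itself. The function name `simulate_event_based` and all parameters are hypothetical.

```python
import heapq
import numpy as np

def simulate_event_based(weights, delays, initial_spikes,
                         tau=10.0, v_th=1.0, t_max=100.0):
    """Event-driven LIF network sketch (NOT the SparseProp algorithm).

    Membrane potentials decay exponentially between spikes,
    V(t) = V(t0) * exp(-(t - t0)/tau), so the state can be updated
    analytically at event times instead of stepping a global clock.
    """
    n = weights.shape[0]
    v = np.zeros(n)                # membrane potentials
    last_t = np.zeros(n)           # time of each neuron's last update
    queue = list(initial_spikes)   # events: (arrival time, target, weight)
    heapq.heapify(queue)
    spikes = []
    while queue:
        t, j, w = heapq.heappop(queue)
        if t > t_max:
            break
        # analytic exponential decay since this neuron's last event
        v[j] = v[j] * np.exp(-(t - last_t[j]) / tau) + w
        last_t[j] = t
        if v[j] >= v_th:
            v[j] = 0.0             # reset after spiking
            spikes.append((t, j))
            # deliver the spike to all postsynaptic targets after a delay
            for k in np.nonzero(weights[j])[0]:
                heapq.heappush(queue, (t + delays[j, k], int(k), weights[j, k]))
    return spikes
```

For sparse connectivity the inner loop touches only a neuron's nonzero outgoing weights, which is what makes event-based schemes attractive for sparse SNNs.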

Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians

1 code implementation · NeurIPS 2023 · Rainer Engelken

For challenging tasks, we show that gradient flossing during training can further increase the time horizon that can be bridged by backpropagation through time.
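The abstract describes controlling Jacobians during training; the sketch below only shows the standard QR method for estimating finite-time Lyapunov exponents of an RNN's Jacobian products, which is the quantity such a control would act on. It is a generic illustration, not the paper's gradient-flossing procedure; the function name and the toy `tanh` dynamics are assumptions.

```python
import numpy as np

def finite_time_lyapunov_exponents(W, h0, T, k=3):
    """Leading finite-time Lyapunov exponents of h_{t+1} = tanh(W h_t),
    estimated by QR-reorthonormalizing k tangent vectors along the orbit.
    (Illustrative only; not the gradient-flossing algorithm itself.)"""
    n = len(h0)
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random tangent basis
    h = h0.copy()
    log_r = np.zeros(k)
    for _ in range(T):
        h_new = np.tanh(W @ h)
        J = (1.0 - h_new**2)[:, None] * W   # Jacobian of the map at this step
        Q, R = np.linalg.qr(J @ Q)          # push tangent vectors, re-orthonormalize
        log_r += np.log(np.abs(np.diag(R))) # accumulate local expansion rates
        h = h_new
    return log_r / T
```

Exponents far from zero signal exploding or vanishing gradients under backpropagation through time; a flossing-style regularizer would penalize their magnitude during training.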

Input correlations impede suppression of chaos and learning in balanced rate networks

no code implementations · 24 Jan 2022 · Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott

To study this phenomenon, we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size, for both common and independent input.
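For orientation, the kind of driven balanced rate network the abstract refers to can be simulated directly with an Euler step; the sketch below contrasts sinusoidal drive that is common across units with drive carrying an independent random phase per unit. It is a minimal illustration under assumed dynamics dx/dt = -x + g·W·tanh(x) + I(t), not the paper's model or code, and it does not implement the mean-field theory.

```python
import numpy as np

def simulate_rate_network(g=2.0, n=200, T=500, dt=0.1,
                          amp=1.0, freq=0.05, common=True, seed=0):
    """Euler simulation of dx/dt = -x + g * W @ tanh(x) + I(t),
    where I(t) is sinusoidal and either common to all units
    or given an independent random phase per unit.
    (Illustrative sketch; not the paper's model.)"""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) / np.sqrt(n)   # random recurrent coupling
    phases = np.zeros(n) if common else rng.uniform(0, 2 * np.pi, n)
    x = 0.5 * rng.standard_normal(n)
    traj = np.empty((T, n))
    for t in range(T):
        I = amp * np.sin(2 * np.pi * freq * t * dt + phases)
        x = x + dt * (-x + g * W @ np.tanh(x) + I)
        traj[t] = x
    return traj
```

Running it with `common=True` versus `common=False` gives the two input conditions whose effect on chaos suppression the paper analyzes.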

Curriculum learning as a tool to uncover learning principles in the brain

no code implementations · ICLR 2022 · Daniel R. Kepple, Rainer Engelken, Kanaka Rajan

Using recurrent neural networks (RNNs) and models of common experimental neuroscience tasks, we demonstrate that curricula can be used to differentiate learning principles, using target-based and representation-based loss functions as use cases.
