1 code implementation • NeurIPS 2023 • Rainer Engelken
In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse spiking neural networks (SNNs).
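As a rough illustration of the event-based idea (not the SparseProp algorithm itself, which the paper builds on more efficient bookkeeping), the sketch below simulates a sparse LIF network spike by spike with a priority queue: each event touches only the K postsynaptic neurons instead of the whole network. All parameter values are illustrative assumptions.

```python
# Hypothetical event-driven simulation of a sparse LIF network.
# Generic priority-queue scheme with lazy deletion; not SparseProp itself.
import heapq
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 10                     # neurons, synapses per neuron (sparse)
tau, theta, I_ext, Jw = 1.0, 1.0, 1.2, -0.05
targets = [rng.choice(N, K, replace=False) for _ in range(N)]

V = np.zeros(N)                    # membrane potentials
last = np.zeros(N)                 # time each neuron's state was last synced
version = np.zeros(N, dtype=int)   # invalidates stale heap entries

def next_spike(i, t_now):
    """Analytic LIF threshold-crossing time; assumes V[i] is synced to t_now."""
    v_inf = I_ext * tau
    if v_inf <= theta:
        return np.inf
    return t_now + tau * np.log((v_inf - V[i]) / (v_inf - theta))

heap = [(next_spike(i, 0.0), i, 0) for i in range(N)]
heapq.heapify(heap)

t, n_spikes = 0.0, 0
while heap and n_spikes < 1000:
    t_sp, i, ver = heapq.heappop(heap)
    if ver != version[i]:
        continue                   # stale entry (lazy deletion)
    t, n_spikes = t_sp, n_spikes + 1
    V[i], last[i] = 0.0, t         # reset the spiking neuron
    version[i] += 1
    heapq.heappush(heap, (next_spike(i, t), i, version[i]))
    for j in targets[i]:           # update only the K postsynaptic neurons
        V[j] = I_ext * tau + (V[j] - I_ext * tau) * np.exp(-(t - last[j]) / tau)
        V[j] += Jw                 # apply the synaptic kick
        last[j] = t
        version[j] += 1
        heapq.heappush(heap, (next_spike(j, t), j, version[j]))
print(f"simulated {n_spikes} spikes up to t = {t:.2f}")
```

Because only the earliest pending spike is processed and only its K targets are rescheduled, the cost per spike is O(K log N) rather than O(N).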
1 code implementation • NeurIPS 2023 • Rainer Engelken
For challenging tasks, we show that gradient flossing during training can further increase the time horizon that can be bridged by backpropagation through time.
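A minimal sketch of the idea, assuming a vanilla tanh RNN: estimate finite-time Lyapunov exponents along a trajectory by QR-reorthonormalizing products of Jacobians, then penalize their squared values so they are pushed toward zero. The names (`flossing_loss`, `n_exp`) and all hyperparameters are illustrative, not taken from the paper's code.

```python
# Illustrative gradient-flossing penalty for a vanilla tanh RNN.
import torch

torch.manual_seed(0)
N, T, n_exp = 64, 50, 8            # hidden size, trajectory length, exponents
W = torch.nn.Parameter(torch.randn(N, N) / N**0.5)
b = 0.1 * torch.randn(N)           # fixed bias keeps activity off the origin

def flossing_loss(W):
    h = torch.tanh(torch.randn(N))
    Q = torch.linalg.qr(torch.randn(N, n_exp))[0]  # orthonormal tangent basis
    log_r = torch.zeros(n_exp)
    for _ in range(T):
        h = torch.tanh(W @ h + b)
        J = (1 - h**2).unsqueeze(1) * W            # Jacobian dh_{t+1}/dh_t
        Q, R = torch.linalg.qr(J @ Q)
        log_r = log_r + torch.log(torch.diagonal(R).abs() + 1e-12)
    lyap = log_r / T                               # finite-time Lyapunov exponents
    return (lyap ** 2).sum()                       # floss: push them toward zero

opt = torch.optim.Adam([W], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = flossing_loss(W)
    loss.backward()
    opt.step()
print("flossed exponent penalty:", loss.item())
```

In practice such a penalty would be added, with a small coefficient, to the task loss rather than minimized on its own; exponents near zero mean Jacobian products neither explode nor vanish, which is what lets backpropagation through time bridge longer horizons.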
no code implementations • 24 Jan 2022 • Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott
To study this phenomenon, we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size, for both common and independent input.
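A direct numerical counterpart to such a theory, sketched below under assumed parameter values (network size, coupling strength g, drive frequency are all illustrative), is to estimate the largest Lyapunov exponent of a sinusoidally driven rate network by tracking the divergence of two nearby trajectories and renormalizing:

```python
# Largest-Lyapunov-exponent estimate for a driven firing-rate network
# dx/dt = -x + J*tanh(x) + I(t), Euler-discretized. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, T, d0 = 400, 2.0, 0.05, 4000, 1e-8
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
omega = 2 * np.pi * 0.1                     # input frequency

def largest_lyapunov(amp, common_input=True):
    x = rng.standard_normal(N)
    d = rng.standard_normal(N)
    d *= d0 / np.linalg.norm(d)             # tiny initial perturbation
    phase = 0.0 if common_input else rng.uniform(0, 2 * np.pi, N)
    lam = 0.0
    for t in range(T):
        I = amp * np.sin(omega * t * dt + phase)
        step = lambda y: y + dt * (-y + J @ np.tanh(y) + I)
        x_pert = step(x + d)
        x = step(x)
        d = x_pert - x
        nrm = np.linalg.norm(d)
        lam += np.log(nrm / d0)             # accumulate expansion rate
        d *= d0 / nrm                       # renormalize the perturbation
    return lam / (T * dt)

for amp in (0.0, 0.5, 2.0):                 # strong drive typically tames chaos
    print(f"amp={amp}: lambda_max ~ {largest_lyapunov(amp):.3f}")
```

Sweeping the amplitude, frequency, or the `common_input` flag gives the same dependencies the mean-field theory predicts analytically.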
no code implementations • ICLR 2022 • Daniel R. Kepple, Rainer Engelken, Kanaka Rajan
Using recurrent neural networks (RNNs) and models of common experimental neuroscience tasks, we demonstrate that curricula can be used to differentiate learning principles, using target-based and representation-based loss functions as use cases.
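A hedged sketch of what such a probe might look like in code (the task, curriculum schedule, and both losses below are hypothetical stand-ins, not the paper's setup): train the same RNN on an easy-to-hard memory task, once with a target-based loss on the output and once with a representation-based loss on the hidden state, and compare the outcomes.

```python
# Illustrative curriculum probe: one RNN, two candidate learning principles.
import torch

torch.manual_seed(0)
N, batch = 32, 64
h_target = torch.randn(N)                  # fixed random hidden-state target

def memory_batch(delay):
    """Report the cue shown at t=0 after `delay` blank steps."""
    cue = torch.randn(batch, 1, 1)
    x = torch.cat([cue, torch.zeros(batch, delay, 1)], dim=1)
    return x, cue.reshape(batch)

def train(loss_kind, steps=400):
    rnn = torch.nn.RNN(1, N, batch_first=True)
    readout = torch.nn.Linear(N, 1)
    params = list(rnn.parameters()) + list(readout.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    delay = 1                              # curriculum: start easy
    for step in range(steps):
        x, y = memory_batch(delay)
        h_seq, _ = rnn(x)
        if loss_kind == "target":          # target-based: match the output
            out = readout(h_seq[:, -1]).reshape(batch)
            loss = ((out - y) ** 2).mean()
        else:                              # representation-based: shape hidden state
            loss = ((h_seq[:, -1] - h_target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 100 == 99:
            delay += 1                     # curriculum: harder over time
    return loss.item()

for kind in ("target", "representation"):
    print(kind, train(kind))
```

The point of such a probe is that the two learning principles respond differently as the curriculum ramps up difficulty, so the training trajectory itself becomes a signature of the underlying loss.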