no code implementations • 17 Feb 2023 • David G. Clark, L. F. Abbott
In chaotic states with strong Hebbian plasticity, a stable fixed point of neuronal dynamics is destabilized by synaptic dynamics, allowing any neuronal state to be stored as a stable fixed point by halting the plasticity.
no code implementations • 25 Jul 2022 • David G. Clark, L. F. Abbott, Ashok Litwin-Kumar
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
1 code implementation • 5 Feb 2022 • Samuel Lippl, L. F. Abbott, SueYeon Chung
Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.
no code implementations • 24 Jan 2022 • Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott
To study this phenomenon we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input.
1 code implementation • NeurIPS 2021 • David G. Clark, L. F. Abbott, SueYeon Chung
We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment.
no code implementations • 14 Apr 2021 • SueYeon Chung, L. F. Abbott
One approach to addressing this challenge is to use mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry.
no code implementations • 29 Dec 2018 • Alessandro Ingrosso, L. F. Abbott
The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system.
no code implementations • 12 Dec 2018 • Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott
We demonstrate that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.
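The sign-symmetry idea can be illustrated with a small numpy sketch (not the paper's code; the two-layer regression task, dimensions, and learning rate here are illustrative assumptions). Error is propagated backward through a fixed-magnitude feedback matrix whose entries merely agree in sign with the forward weights, rather than through the exact transpose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: match a fixed random linear teacher.
N_in, N_hid, N_out, N_samples = 5, 20, 3, 40
X = rng.standard_normal((N_in, N_samples))
Y = (rng.standard_normal((N_out, N_in)) / np.sqrt(N_in)) @ X

# Forward weights of a two-layer network.
W1 = rng.standard_normal((N_hid, N_in)) / np.sqrt(N_in)
W2 = rng.standard_normal((N_out, N_hid)) / np.sqrt(N_hid)

# Feedback magnitudes are random and fixed; only the SIGNS track W2.T,
# i.e., the weaker form of weight symmetry (no magnitude transport).
B_mag = np.abs(rng.standard_normal((N_hid, N_out)))

lr, losses = 0.02, []
for _ in range(300):
    H = np.tanh(W1 @ X)              # hidden activity
    E = W2 @ H - Y                   # output error
    losses.append(float(np.mean(E ** 2)))
    B = B_mag * np.sign(W2.T)        # sign-concordant feedback matrix
    dH = (B @ E) * (1.0 - H ** 2)    # error routed through B, not W2.T
    W2 -= lr * (E @ H.T) / N_samples
    W1 -= lr * (dH @ X.T) / N_samples

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the feedback signs agree with the forward weights, the hidden-layer updates correlate with the true gradient and the loss decreases, even though no weight magnitudes are transported backward.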
1 code implementation • 9 Oct 2017 • Brian DePasquale, Christopher J. Cueva, Kanaka Rajan, G. Sean Escola, L. F. Abbott
We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations.
no code implementations • 3 May 2017 • Ran Rubin, L. F. Abbott, Haim Sompolinsky
To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks.
no code implementations • 22 Aug 2016 • David Sussillo, Rafal Jozefowicz, L. F. Abbott, Chethan Pandarinath
Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously.
1 code implementation • 28 Jan 2016 • Brian DePasquale, Mark M. Churchland, L. F. Abbott
Recurrent neural networks are powerful tools for understanding and modeling computation and representation by populations of neurons.
no code implementations • 19 Dec 2014 • David Sussillo, L. F. Abbott
We show that the successive application of correctly scaled random matrices to an initial vector results in a random walk of the log of the norm of the resulting vectors, and we compute the scaling that makes this walk unbiased.
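This random-walk picture is easy to simulate (a minimal numpy sketch, not the paper's code; the network size, depth, and gain values below are illustrative assumptions). Applying i.i.d. Gaussian matrices with entries of variance g²/N to a unit vector makes the log of the vector norm drift up for g > 1 and down for g < 1, with g near 1 leaving the walk approximately unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_norm_walk(g, N=200, depth=100):
    """Log norm of W_depth ... W_1 x for a unit vector x, where each W
    has i.i.d. N(0, g^2/N) entries; the vector is renormalized at each
    step so only the log-norm increments accumulate (numerical safety)."""
    x = rng.standard_normal(N)
    x /= np.linalg.norm(x)
    log_norm = 0.0
    for _ in range(depth):
        x = (g / np.sqrt(N)) * rng.standard_normal((N, N)) @ x
        log_norm += np.log(np.linalg.norm(x))
        x /= np.linalg.norm(x)
    return log_norm

grow, shrink = log_norm_walk(2.0), log_norm_walk(0.5)
print(f"g=2.0: {grow:+.1f}   g=0.5: {shrink:+.1f}")
```

The per-step increment concentrates around log g for large N, so the cumulative log norm behaves as a biased random walk whose drift is set by the matrix scaling.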