Search Results for author: L. F. Abbott

Found 13 papers, 4 papers with code

Theory of coupled neuronal-synaptic dynamics

no code implementations • 17 Feb 2023 • David G. Clark, L. F. Abbott

In chaotic states with strong Hebbian plasticity, a stable fixed point of neuronal dynamics is destabilized by synaptic dynamics, allowing any neuronal state to be stored as a stable fixed point by halting the plasticity.
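A minimal numerical sketch of this flavor of model (assumed parameters and a simplified Hebbian rule with decay, not the paper's exact equations): neuronal rates and synaptic weights evolve together, and plasticity is halted partway through the run.

```python
# Illustrative sketch of coupled neuronal-synaptic dynamics; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N, g, k, dt, T = 200, 2.0, 0.5, 0.05, 4000   # size, coupling gain, Hebbian strength (assumed values)

J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # random synaptic matrix
x = rng.normal(size=N)                       # neuronal state

for t in range(T):
    r = np.tanh(x)
    x += dt * (-x + J @ r)                                   # neuronal dynamics
    if t < T // 2:                                           # plasticity active only during the first half
        J += dt * ((k / N) * np.outer(r, r) - 0.1 * J)       # Hebbian growth with weight decay

# Once plasticity is halted, the network relaxes toward a fixed point shaped by J.
print("mean rate after plasticity is halted:", np.tanh(x).mean())
```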

Dimension of activity in random neural networks

no code implementations • 25 Jul 2022 • David G. Clark, L. F. Abbott, Ashok Litwin-Kumar

Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
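A hedged sketch of one common way to quantify the dimension of such activity, the participation ratio of the activity covariance spectrum; the paper's precise definitions and theory are not reproduced here, and all parameter values are assumed.

```python
# Sketch: effective dimension of activity in a random rate network via the participation ratio.
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, T = 300, 1.8, 0.05, 6000
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
x = rng.normal(size=N)

activity = []
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))
    if t > T // 2:                             # discard the transient
        activity.append(np.tanh(x).copy())

C = np.cov(np.array(activity).T)               # N x N covariance of unit activity
eig = np.linalg.eigvalsh(C)
dim = eig.sum() ** 2 / (eig ** 2).sum()        # participation ratio
print("effective dimension (participation ratio):", dim)
```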

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks

1 code implementation • 5 Feb 2022 • Samuel Lippl, L. F. Abbott, SueYeon Chung

Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.

Inductive Bias

Input correlations impede suppression of chaos and learning in balanced rate networks

no code implementations • 24 Jan 2022 • Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott

To study this phenomenon we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input.
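A hedged numerical sketch of the quantity being studied (not the paper's mean-field theory): the largest Lyapunov exponent of a sinusoidally driven rate network, estimated by tracking a perturbed trajectory, with common input modeled as a shared phase and independent input approximated by per-unit random phases. All parameter values are assumed.

```python
# Sketch: largest Lyapunov exponent of a driven rate network (Benettin-style estimate).
import numpy as np

def lyapunov_estimate(common_input, N=200, g=2.0, amp=1.0, freq=0.2,
                      dt=0.05, T=10000, d0=1e-8, seed=2):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))
    phases = np.zeros(N) if common_input else rng.uniform(0, 2 * np.pi, N)
    x = rng.normal(size=N)
    v = rng.normal(size=N)
    y = x + d0 * v / np.linalg.norm(v)            # perturbed copy at distance d0
    growth = 0.0
    for t in range(T):
        drive = amp * np.sin(2 * np.pi * freq * t * dt + phases)
        x = x + dt * (-x + J @ np.tanh(x) + drive)
        y = y + dt * (-y + J @ np.tanh(y) + drive)
        d = np.linalg.norm(y - x)
        growth += np.log(d / d0)
        y = x + (d0 / d) * (y - x)                # renormalize the separation
    return growth / (T * dt)

print("lambda_max with common input:     ", lyapunov_estimate(True))
print("lambda_max with independent input:", lyapunov_estimate(False))
```

Whether chaos is suppressed in either case depends on the chosen drive amplitude, frequency, and coupling strength; the sketch only shows how the comparison can be set up numerically.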

Credit Assignment Through Broadcasting a Global Error Vector

1 code implementation • NeurIPS 2021 • David G. Clark, L. F. Abbott, SueYeon Chung

We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment.
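A hedged sketch in the spirit of broadcasting a single global error vector to a hidden layer. This is closer to direct feedback alignment than to the paper's vectorized nonnegative networks and sign-matching proof; the broadcast matrix, architecture, and parameters below are illustrative assumptions.

```python
# Sketch: hidden-layer updates driven by a broadcast global error vector (not the paper's GEVB rule).
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out, lr = 20, 64, 5, 0.05

W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))
B = rng.normal(0, 1 / np.sqrt(n_out), (n_hid, n_out))    # fixed broadcast matrix (never learned)

X = rng.normal(size=(500, n_in))
Y = np.eye(n_out)[rng.integers(0, n_out, 500)]            # random one-hot targets

for _ in range(200):
    H = np.maximum(0.0, X @ W1.T)                         # ReLU hidden layer
    E = H @ W2.T - Y                                      # global error vector, one per sample
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * ((E @ B.T) * (H > 0)).T @ X / len(X)       # broadcast error drives the hidden update
print("final MSE:", np.mean((np.maximum(0.0, X @ W1.T) @ W2.T - Y) ** 2))
```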

Neural population geometry: An approach for understanding biological and artificial neural networks

no code implementations • 14 Apr 2021 • SueYeon Chung, L. F. Abbott

One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry.

BIG-bench Machine Learning • Disentanglement

Training dynamically balanced excitatory-inhibitory networks

no code implementations • 29 Dec 2018 • Alessandro Ingrosso, L. F. Abbott

The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system.

Feedback alignment in deep convolutional networks

no code implementations • 12 Dec 2018 • Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott

We demonstrate that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.
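A minimal sketch of the sign-symmetry idea described above, shown here in a two-layer fully connected network rather than the convolutional networks the paper studies; architecture and parameter values are assumed for illustration.

```python
# Sketch: feedback through sign(W2) only, so feedback and forward weights agree in sign but not magnitude.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hid, n_out, lr = 20, 64, 5, 0.05
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))

X = rng.normal(size=(500, n_in))
Y = np.eye(n_out)[rng.integers(0, n_out, 500)]

for _ in range(200):
    H = np.maximum(0.0, X @ W1.T)
    E = H @ W2.T - Y
    W2 -= lr * E.T @ H / len(X)
    delta_hidden = (E @ np.sign(W2)) * (H > 0)   # backward pass uses sign(W2) in place of W2
    W1 -= lr * delta_hidden.T @ X / len(X)
print("final MSE:", np.mean((np.maximum(0.0, X @ W1.T) @ W2.T - Y) ** 2))
```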

full-FORCE: A Target-Based Method for Training Recurrent Networks

1 code implementation • 9 Oct 2017 • Brian DePasquale, Christopher J. Cueva, Kanaka Rajan, G. Sean Escola, L. F. Abbott

We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations.
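A simplified, hedged sketch of the target-based idea: a driven target-generating network supplies internal targets, and the task network's full recurrent matrix is fit to reproduce them. Batch ridge regression stands in for the paper's recursive least-squares procedure, and all parameter values are assumed.

```python
# Sketch: fit a full recurrent matrix J so its recurrent drive matches targets from a driven network.
import numpy as np

rng = np.random.default_rng(5)
N, dt, T = 100, 0.05, 4000
t_axis = np.arange(T) * dt
f_out = np.sin(0.5 * t_axis)                                  # desired output signal

J_D = rng.normal(0, 1.5 / np.sqrt(N), (N, N))                 # target-generating network
u = rng.normal(size=N)                                        # feeds f_out into that network

xD, rates, drive = rng.normal(size=N), [], []
for t in range(T):
    rD = np.tanh(xD)
    rec = J_D @ rD + u * f_out[t]                             # recurrent drive serves as the target
    xD += dt * (-xD + rec)
    rates.append(rD)
    drive.append(rec)
R, D = np.array(rates), np.array(drive)

lam = 1.0                                                     # ridge penalty
J = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ D).T     # least-squares fit of the full matrix
print("recurrent-drive fit error:", np.mean((R @ J.T - D) ** 2))
```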

Balanced Excitation and Inhibition are Required for High-Capacity, Noise-Robust Neuronal Selectivity

no code implementations • 3 May 2017 • Ran Rubin, L. F. Abbott, Haim Sompolinsky

To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks.
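A hedged sketch of the single-neuron side of such a robustness measurement (the classifier, noise level, and learning rule below are assumptions, not the paper's setup): a linear-threshold neuron stores labeled patterns, and we ask how often its selective response survives Gaussian input noise.

```python
# Sketch: robustness of a single neuron's stimulus selectivity to input noise.
import numpy as np

rng = np.random.default_rng(7)
N, P, sigma_in = 200, 40, 0.3                       # input dimension, stored patterns, noise level

patterns = rng.choice([-1.0, 1.0], size=(P, N))
labels = rng.choice([-1.0, 1.0], size=P)
w = (labels @ patterns) / N                         # simple Hebbian-style weight vector

correct = 0
trials = 2000
for _ in range(trials):
    i = rng.integers(P)
    x = patterns[i] + sigma_in * rng.normal(size=N)  # noisy presentation of a stored stimulus
    correct += np.sign(w @ x) == labels[i]
print("fraction of noisy trials classified correctly:", correct / trials)
```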

LFADS - Latent Factor Analysis via Dynamical Systems

no code implementations • 22 Aug 2016 • David Sussillo, Rafal Jozefowicz, L. F. Abbott, Chethan Pandarinath

Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously.

Using Firing-Rate Dynamics to Train Recurrent Networks of Spiking Model Neurons

1 code implementation • 28 Jan 2016 • Brian DePasquale, Mark M. Churchland, L. F. Abbott

Recurrent neural networks are powerful tools for understanding and modeling computation and representation by populations of neurons.

Neurons and Cognition

Random Walk Initialization for Training Very Deep Feedforward Networks

no code implementations • 19 Dec 2014 • David Sussillo, L. F. Abbott

We show that the successive application of correctly scaled random matrices to an initial vector results in a random walk of the log of the norm of the resulting vectors, and we compute the scaling that makes this walk unbiased.
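A small numerical sketch of the log-norm random walk for the linear case only (the paper's general, depth- and nonlinearity-dependent scalings are not reproduced here): repeatedly applying matrices scaled by g/sqrt(N) makes log||h_d|| a random walk, and g can be tuned so the walk's mean step is approximately zero. The exp(1/(2N)) correction below is a standard chi-squared bias adjustment used for illustration.

```python
# Sketch: mean step of the log-norm random walk under repeated random linear layers.
import numpy as np

rng = np.random.default_rng(6)
N, depth, trials = 100, 100, 50

def mean_log_step(g):
    steps = []
    for _ in range(trials):
        h = rng.normal(size=N)
        for _ in range(depth):
            W = rng.normal(0.0, g / np.sqrt(N), (N, N))
            h_new = W @ h                           # linear layer, no nonlinearity
            steps.append(np.log(np.linalg.norm(h_new) / np.linalg.norm(h)))
            h = h_new
    return np.mean(steps)

for g in (1.0, np.exp(1.0 / (2 * N))):              # exp(1/(2N)) roughly cancels the chi-squared bias
    print(f"g = {g:.4f}: mean log-norm step = {mean_log_step(g):+.5f}")
```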
