Search Results for author: Ashok Litwin-Kumar

Found 5 papers, 0 papers with code

Dimension of activity in random neural networks

no code implementations 25 Jul 2022 David G. Clark, L. F. Abbott, Ashok Litwin-Kumar

Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
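The paper's subject, the dimension of activity in a random network, is commonly quantified with the participation ratio of the activity covariance. A minimal sketch (not the paper's actual analysis; the network size, gain, and dimension measure here are illustrative assumptions) of simulating standard random rate dynamics and computing that ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200          # number of units (illustrative choice)
g = 2.0          # coupling gain; g > 1 yields chaotic activity
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random connectivity

# Euler-integrate the standard rate dynamics dx/dt = -x + J tanh(x)
dt, T = 0.05, 4000
x = rng.normal(size=N)
traj = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + J @ np.tanh(x))
    traj[t] = x

# Participation ratio of the activity covariance:
# PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, which lies between 1 and N
acts = np.tanh(traj[T // 2:])        # discard the initial transient
cov = np.cov(acts.T)
eig = np.linalg.eigvalsh(cov)
pr = eig.sum() ** 2 / (eig ** 2).sum()
print(f"participation ratio: {pr:.1f} of N = {N}")
```

A participation ratio well below N indicates that the network's activity is confined to a lower-dimensional subspace, the kind of quantity the paper characterizes analytically.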

Action-modulated midbrain dopamine activity arises from distributed control policies

no code implementations 1 Jul 2022 Jack Lindsey, Ashok Litwin-Kumar

The model provides a computational account for numerous experimental findings about dopamine activity that cannot be explained by classic models of reinforcement learning in the basal ganglia.

Tasks: Q-Learning, reinforcement-learning +1

Learning to Learn with Feedback and Local Plasticity

no code implementations NeurIPS 2020 Jack Lindsey, Ashok Litwin-Kumar

Interest in biologically inspired alternatives to backpropagation is driven by the desire to both advance connections between deep learning and neuroscience and address backpropagation's shortcomings on tasks such as online, continual learning.

Tasks: Continual Learning, Meta-Learning

Evolving the Olfactory System

no code implementations NeurIPS Workshop Neuro_AI 2019 Robert Guangyu Yang, Peter Yiliu Wang, Yi Sun, Ashok Litwin-Kumar, Richard Axel, LF Abbott

In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors.

Feedback alignment in deep convolutional networks

no code implementations 12 Dec 2018 Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott

We demonstrate that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.
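The sign-symmetric feedback idea described above can be illustrated with a toy sketch: feedback weights have random magnitudes but copy the signs of the forward weights. This is a minimal assumption-laden example (a tiny fully connected regression network, not the paper's deep convolutional setup), with all sizes and learning rates chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer network on an arbitrary smooth regression target.
n_in, n_h, n_out = 10, 32, 1
W1 = rng.normal(0, 0.5, (n_h, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_h))

# Sign-concordant feedback: magnitudes are random and fixed in scale,
# but signs agree with the forward weights W2 (weaker than full symmetry).
B = np.sign(W2) * np.abs(rng.normal(0, 0.5, W2.shape))

X = rng.normal(size=(256, n_in))
y = np.sin(X @ rng.normal(size=n_in))[:, None]

mse0 = float(np.mean((np.tanh(X @ W1.T) @ W2.T - y) ** 2))

lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1.T)            # hidden activity
    out = h @ W2.T                   # linear readout
    err = out - y                    # output error
    # Backpropagate error through B instead of W2.T: feedback alignment,
    # here with the sign-symmetry constraint applied to B.
    dh = (err @ B) * (1 - h ** 2)
    W2 -= lr * err.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)
    # Re-impose sign agreement after each forward-weight update.
    B = np.sign(W2) * np.abs(B)

mse = float(np.mean((np.tanh(X @ W1.T) @ W2.T - y) ** 2))
print(f"mse: {mse0:.4f} -> {mse:.4f}")
```

The key design point is that only the sign pattern of `W2` is shared with the feedback path; the magnitudes of `B` never track `W2`, which is the weaker form of weight symmetry the abstract refers to.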
