25 Jul 2022 • David G. Clark, L. F. Abbott, Ashok Litwin-Kumar
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
1 Jul 2022 • Jack Lindsey, Ashok Litwin-Kumar
The model provides a computational account of numerous experimental findings about dopamine activity that cannot be explained by classic models of reinforcement learning in the basal ganglia.
NeurIPS 2020 • Jack Lindsey, Ashok Litwin-Kumar
Interest in biologically inspired alternatives to backpropagation is driven by the desire both to advance connections between deep learning and neuroscience and to address backpropagation's shortcomings on tasks such as online, continual learning.
NeurIPS Workshop Neuro_AI 2019 • Robert Guangyu Yang, Peter Yiliu Wang, Yi Sun, Ashok Litwin-Kumar, Richard Axel, L. F. Abbott
In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors.
12 Dec 2018 • Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott
We demonstrate that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.