no code implementations • 2 Aug 2023 • Yanis Bahroun, Shagesh Sridharan, Atithi Acharya, Dmitri B. Chklovskii, Anirvan M. Sengupta
This study focuses on the primarily unsupervised similarity matching (SM) framework, which aligns with observed mechanisms in biological systems and offers online, localized, and biologically plausible algorithms.
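The excerpt names the similarity matching framework but does not spell it out. As background only (not this paper's algorithm), the classic online similarity-matching network pairs a Hebbian feedforward weight matrix with an anti-Hebbian lateral matrix; the sketch below, with made-up dimensions and learning rate, shows that local-update structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 3000          # input dim, output dim, samples (illustrative)

# synthetic data concentrated on a k-dimensional subspace
U = np.linalg.qr(rng.standard_normal((n, k)))[0]
X = U @ rng.standard_normal((k, T)) * 2.0 + 0.1 * rng.standard_normal((n, T))

W = 0.5 * rng.standard_normal((k, n))   # feedforward weights (Hebbian)
M = np.eye(k)                           # lateral weights (anti-Hebbian)
eta = 0.01
for t in range(T):
    x = X[:, t]
    y = np.linalg.solve(M, W @ x)       # fixed point of the neural dynamics
    W += eta * (np.outer(y, x) - W)     # local Hebbian update
    M += eta * (np.outer(y, y) - M)     # local anti-Hebbian update
```

Both updates use only quantities available at a single synapse, which is what makes this family of algorithms online, localized, and biologically plausible in the sense the abstract describes.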
no code implementations • 2 Aug 2023 • Yanis Bahroun, Dmitri B. Chklovskii, Anirvan M. Sengupta
In this work, we focus not on developing new algorithms but on showing that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
no code implementations • 20 Feb 2023 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain.
1 code implementation • 27 Oct 2022 • Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex.
2 code implementations • NeurIPS 2021 • Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan M. Sengupta, Dmitri B. Chklovskii
This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances.
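For reference, this is the textbook Kalman filter step that the derived network is said to reproduce; the paper's contribution is achieving this in a neural circuit without multiple phases or known noise covariances, which this conventional sketch does not attempt:

```python
import numpy as np

def kalman_step(x, P, A, C, Q, R, y):
    """One predict/update cycle of the standard Kalman filter."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with observation y
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# illustrative scalar random walk observed in noise
rng = np.random.default_rng(0)
A = C = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[1.0]])
x_est, P = np.zeros(1), np.eye(1)
truth = 0.0
for _ in range(50):
    truth += 0.1 * rng.standard_normal()
    obs = np.array([truth + rng.standard_normal()])
    x_est, P = kalman_step(x_est, P, A, C, Q, R, obs)
```

Note that the conventional filter needs Q and R explicitly; the abstract's claim is precisely that the derived network dispenses with that knowledge.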
1 code implementation • 24 Apr 2021 • Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii
In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known.
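The "oracle-like" baseline in the excerpt knows the ground-truth autoregressive parameters. A minimal point of comparison, using assumed AR(2) coefficients and batch least squares rather than the paper's online method, shows how such parameters can be recovered from data:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = np.array([0.6, -0.3])   # hypothetical AR(2) coefficients (stable)
T = 5000

# simulate the AR(2) process x[t] = a1*x[t-1] + a2*x[t-2] + noise
x = np.zeros(T)
for t in range(2, T):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] \
        + 0.1 * rng.standard_normal()

# batch least-squares estimate of the AR coefficients
Y = x[2:]
Z = np.column_stack([x[1:-1], x[:-2]])
a_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
```

With enough samples the least-squares estimate approaches the generating coefficients, which is the sense in which a segmenter with estimated parameters can match an oracle that knows them.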
no code implementations • 10 Feb 2021 • Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules.
no code implementations • 30 Nov 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.
no code implementations • NeurIPS 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
1 code implementation • 1 Oct 2020 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
no code implementations • 21 Aug 2019 • Alexander Genkin, Anirvan M. Sengupta, Dmitri Chklovskii
Here, we propose a feed-forward neural network capable of semi-supervised learning on manifolds without using an explicit graph representation.
no code implementations • 19 Jun 2017 • Mariano Tepper, Anirvan M. Sengupta, Dmitri Chklovskii
In solving hard computational problems, semidefinite program (SDP) relaxations often play an important role because they come with a guarantee of optimality.