Search Results for author: Anirvan M. Sengupta

Found 12 papers, 4 papers with code

Unlocking the Potential of Similarity Matching: Scalability, Supervision and Pre-training

no code implementations • 2 Aug 2023 • Yanis Bahroun, Shagesh Sridharan, Atithi Acharya, Dmitri B. Chklovskii, Anirvan M. Sengupta

This study focuses on the primarily unsupervised similarity matching (SM) framework, which aligns with observed mechanisms in biological systems and offers online, localized, and biologically plausible algorithms.

Computational Efficiency
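A minimal sketch of an online similarity-matching step in the spirit of this framework, with Hebbian feedforward and anti-Hebbian lateral updates (NumPy; the learning rate, dimensions, and single-step neural dynamics are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def sm_online_step(x, W, M, eta=0.01):
    """One online similarity-matching update (Hebbian/anti-Hebbian sketch).

    x: input (d,), W: feedforward weights (k, d), M: lateral weights (k, k),
    eta: learning rate (assumed value). All updates are local.
    """
    y = np.linalg.solve(M, W @ x)        # fixed point of the recurrent neural dynamics
    W += eta * (np.outer(y, x) - W)      # Hebbian feedforward update
    M += eta * (np.outer(y, y) - M)      # anti-Hebbian lateral update
    return y, W, M

# usage: project 10-dimensional inputs onto a 3-dimensional output
rng = np.random.default_rng(0)
W, M = 0.1 * rng.standard_normal((3, 10)), np.eye(3)
for _ in range(1000):
    y, W, M = sm_online_step(rng.standard_normal(10), W, M)
```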

Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning

no code implementations • 2 Aug 2023 • Yanis Bahroun, Dmitri B. Chklovskii, Anirvan M. Sengupta

In this work, we focus not on developing new algorithms but on showing that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
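For reference, the Representer theorem invoked here is the standard statement that regularized empirical risk minimization over a reproducing kernel Hilbert space (with kernel k and a strictly increasing regularizer Ω) has a solution spanned by kernel functions centered on the training data; this is the textbook form, not the paper's specific construction:

```latex
% Representer theorem (standard form)
\min_{f \in \mathcal{H}} \; L\big(f(x_1), \dots, f(x_n)\big) + \Omega\big(\lVert f \rVert_{\mathcal{H}}\big)
\qquad \Longrightarrow \qquad
f^{*}(\cdot) \;=\; \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i)
```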

Normative framework for deriving neural networks with multi-compartmental neurons and non-Hebbian plasticity

no code implementations • 20 Feb 2023 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain.

Self-Supervised Learning

Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy

1 code implementation • 27 Oct 2022 • Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex.

Neural optimal feedback control with local learning rules

2 code implementations • NeurIPS 2021 • Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan M. Sengupta, Dmitri B. Chklovskii

This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances.
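For context, the computation the derived circuit carries out corresponds to the textbook Kalman filter; a minimal predict/update step is sketched below (NumPy). Note that this standard version is handed the noise covariances Q and R, which is precisely the knowledge the paper's network does not require:

```python
import numpy as np

def kalman_step(x, P, A, B, u, C, y, Q, R):
    """One predict/update step of a standard Kalman filter.

    x, P: state estimate and covariance; A, B: dynamics; C: observation map;
    u: control input; y: observation; Q, R: process/observation noise covariances.
    """
    # predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```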

Neural circuits for dynamics-based segmentation of time series

1 code implementation • 24 Apr 2021 • Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii

In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known.

Segmentation • Time Series • +1
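A rough sketch of the idea behind dynamics-based segmentation: fit one autoregressive model per regime and label each window by the model with the lowest one-step prediction error (NumPy; the window length, AR order, and the assumption that per-regime AR coefficients are already known are illustrative, whereas the paper's circuits learn them online):

```python
import numpy as np

def segment_by_ar_error(x, ar_models, window=50):
    """Assign each window of signal x to the AR model that predicts it best.

    x: 1-D signal; ar_models: list of AR coefficient vectors (one per regime).
    Returns one regime label per window.
    """
    labels = []
    for start in range(0, len(x) - window, window):
        seg = x[start:start + window]
        errs = []
        for a in ar_models:
            p = len(a)
            pred = np.array([a @ seg[t - p:t][::-1] for t in range(p, window)])
            errs.append(np.mean((seg[p:] - pred) ** 2))   # one-step prediction error
        labels.append(int(np.argmin(errs)))
    return labels
```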

A biologically plausible neural network for local supervision in cortical microcircuits

no code implementations • 30 Nov 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.
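The weight sharing issue can be seen in a two-layer example: the backward pass reuses the transpose of the forward weights, which a biological circuit would have to "transport" between physically separate synapses (a minimal NumPy sketch; the architecture and squared-error loss are illustrative):

```python
import numpy as np

# forward pass of a 2-layer network: x -> h = relu(W1 x) -> y = W2 h
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((5, 4)), rng.standard_normal((2, 5))
x, target = rng.standard_normal(4), rng.standard_normal(2)

z = W1 @ x
h = np.maximum(z, 0)
y = W2 @ h

# backward pass (squared-error loss): note the explicit W2.T below --
# the hidden-layer error signal travels through the *same* weights used
# in the forward pass, the weight sharing (weight transport) requirement
# that makes exact backprop biologically implausible.
delta_y = y - target
delta_h = (W2.T @ delta_y) * (z > 0)
grad_W2 = np.outer(delta_y, h)
grad_W1 = np.outer(delta_h, x)
```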

A simple normative network approximates local non-Hebbian learning in the cortex

no code implementations • NeurIPS 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.

A biologically plausible neural network for multi-channel Canonical Correlation Analysis

1 code implementation • 1 Oct 2020 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
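For reference, the offline objective that such a network optimizes online is classical canonical correlation analysis; a compact two-view whitening-plus-SVD sketch is below (NumPy; the paper treats a multi-channel generalization with local online rules, and the ridge term and centering here are assumptions):

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """Top-k canonical correlations and directions via whitening + SVD.

    X: (n, dx), Y: (n, dy) paired, centered data; reg: small ridge term (assumed).
    """
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten each view, then take the SVD of the whitened cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    return s[:k], Wx.T @ U[:, :k], Wy.T @ Vt[:k].T   # correlations, x- and y-directions
```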

A Neural Network for Semi-Supervised Learning on Manifolds

no code implementations • 21 Aug 2019 • Alexander Genkin, Anirvan M. Sengupta, Dmitri Chklovskii

Here, we propose a feed-forward neural network capable of semi-supervised learning on manifolds without using an explicit graph representation.

Clustering is semidefinitely not that hard: Nonnegative SDP for manifold disentangling

no code implementations • 19 Jun 2017 • Mariano Tepper, Anirvan M. Sengupta, Dmitri Chklovskii

In solving hard computational problems, semidefinite program (SDP) relaxations often play an important role because they come with a guarantee of optimality.

Clustering
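A hedged sketch of one standard nonnegative SDP relaxation of k-means clustering, in the spirit of this line of work (CVXPY; the exact objective and constraints in the paper may differ from this Peng–Wei-style formulation):

```python
import numpy as np
import cvxpy as cp

def nonneg_sdp_clustering(D, k):
    """Nonnegative SDP relaxation of k-means (sketch).

    D: (n, d) data matrix; k: number of clusters.
    Returns the relaxed cluster co-membership matrix X (n, n).
    """
    n = D.shape[0]
    K = D @ D.T                              # Gram / similarity matrix
    X = cp.Variable((n, n), PSD=True)        # positive semidefinite
    constraints = [
        X >= 0,                              # elementwise nonnegative
        cp.sum(X, axis=1) == np.ones(n),     # rows sum to one
        cp.trace(X) == k,                    # rank-k surrogate
    ]
    cp.Problem(cp.Maximize(cp.trace(K @ X)), constraints).solve()
    return X.value
```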
