no code implementations • 6 Jan 2024 • Siavash Golkar, Jules Berman, David Lipshutz, Robert Mihai Haret, Tim Gollisch, Dmitri B. Chklovskii
Such variation in the temporal filter with input SNR resembles that observed experimentally in biological neurons.
no code implementations • 3 Jan 2024 • Jason Moore, Alexander Genkin, Magnus Tournoy, Joshua Pughe-Sanford, Rob R. de Ruyter van Steveninck, Dmitri B. Chklovskii
In the quest to model neuronal function amidst gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective.
1 code implementation • NeurIPS 2023 • Lyndon R. Duong, Eero P. Simoncelli, Dmitri B. Chklovskii, David Lipshutz
Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses.
no code implementations • 2 Aug 2023 • Yanis Bahroun, Shagesh Sridharan, Atithi Acharya, Dmitri B. Chklovskii, Anirvan M. Sengupta
This study focuses on the primarily unsupervised similarity matching (SM) framework, which aligns with observed mechanisms in biological systems and offers online, localized, and biologically plausible algorithms.
no code implementations • 2 Aug 2023 • Yanis Bahroun, Dmitri B. Chklovskii, Anirvan M. Sengupta
In this work, we focus not on developing new algorithms but on showing that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
no code implementations • 20 Feb 2023 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain.
1 code implementation • 27 Jan 2023 • Lyndon R. Duong, David Lipshutz, David J. Heeger, Dmitri B. Chklovskii, Eero P. Simoncelli
Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems.
no code implementations • 14 Nov 2022 • Siavash Golkar, David Lipshutz, Tiberiu Tesileanu, Dmitri B. Chklovskii
However, the performance of cPCA is sensitive to hyper-parameter choice and there is currently no online algorithm for implementing cPCA.
1 code implementation • 27 Oct 2022 • Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex.
no code implementations • 21 Sep 2022 • David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate recurrent communication.
no code implementations • 3 Dec 2021 • Jules Berman, Dmitri B. Chklovskii, Jingpeng Wu
To address this problem, we propose a novel method based on point cloud representations of neurons.
2 code implementations • NeurIPS 2021 • Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan M. Sengupta, Dmitri B. Chklovskii
This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances.
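For context, the classical Kalman filter that such a network must reproduce can be sketched in its textbook offline form. This is a reference implementation on toy data with known noise covariances, not the paper's biologically plausible network, which learns the model online without that knowledge:

```python
import numpy as np

# Textbook Kalman filter for a scalar random walk observed in noise.
rng = np.random.default_rng(0)
q, r = 0.01, 0.5          # process and observation noise variances (assumed known here)
T = 200

# Simulate the latent random walk x and noisy observations y.
x = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x + rng.normal(0, np.sqrt(r), T)

x_hat = np.zeros(T)       # filtered state estimates
P = 1.0                   # posterior variance
for t in range(T):
    P_pred = P + q                        # predict: variance grows by process noise
    K = P_pred / (P_pred + r)             # Kalman gain
    prev = x_hat[t - 1] if t > 0 else 0.0
    x_hat[t] = prev + K * (y[t] - prev)   # update with the innovation
    P = (1 - K) * P_pred

# Filtering should beat the raw observations in mean squared error.
err_filtered = np.mean((x_hat - x) ** 2)
err_raw = np.mean((y - x) ** 2)
```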
1 code implementation • 24 Apr 2021 • Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii
In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known.
no code implementations • 10 Feb 2021 • Yanis Bahroun, Dmitri B. Chklovskii
However, no biologically plausible networks exist for minor subspace analysis (MSA), a fundamental signal processing task.
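In its offline form, MSA is the mirror image of PCA: it extracts the eigenvectors of the *smallest* covariance eigenvalues rather than the largest. A minimal sketch on toy data (the planted low-variance direction and all parameters are mine, for illustration only):

```python
import numpy as np

# Plant an orthogonal basis in which the last direction has tiny variance.
rng = np.random.default_rng(5)
basis, _ = np.linalg.qr(rng.normal(size=(3, 3)))
scales = np.array([3.0, 2.0, 0.1])        # last direction: low variance
X = rng.normal(size=(2000, 3)) * scales @ basis.T

# Minor component = eigenvector of the smallest covariance eigenvalue.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
minor = evecs[:, 0]

# The estimate should align with the planted low-variance direction.
alignment = abs(minor @ basis[:, 2])
```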
no code implementations • 10 Feb 2021 • Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules.
no code implementations • 30 Nov 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.
1 code implementation • 23 Oct 2020 • David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
To model how the brain performs this task, we seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.
1 code implementation • NeurIPS 2020 • David Lipshutz, Charlie Windolf, Siavash Golkar, Dmitri B. Chklovskii
Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features.
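The temporal-slowness objective behind SFA can be illustrated with a minimal linear version: whiten the input, then take the direction whose temporal derivative has the smallest variance. The toy mixture below is my own construction, not the paper's naturalistic stimuli:

```python
import numpy as np

# A slow sinusoid mixed with a fast-varying distractor.
rng = np.random.default_rng(1)
T = 2000
t = np.arange(T)
slow = np.sin(2 * np.pi * t / 500)               # slowly varying latent feature
fast = rng.normal(size=T)                        # fast distractor
X = np.column_stack([slow + 0.1 * fast, fast])   # observed mixture

# Whiten the centered data.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
Z = Xc @ (evecs / np.sqrt(evals))                # unit-covariance coordinates

# Slowest direction: smallest eigenvector of the derivative covariance.
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
dvals, dvecs = np.linalg.eigh(dcov)
sfa_output = Z @ dvecs[:, 0]                     # the extracted slow feature

# It should recover the slow latent up to sign and scale.
corr = abs(np.corrcoef(sfa_output, slow)[0, 1])
```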
no code implementations • NeurIPS 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
1 code implementation • 1 Oct 2020 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
no code implementations • 5 Aug 2019 • Cengiz Pehlevan, Dmitri B. Chklovskii
Although the currently popular deep learning networks achieve unprecedented performance on some tasks, the human brain still has a monopoly on general intelligence.
no code implementations • 6 Aug 2018 • Andrea Giovannucci, Victor Minden, Cengiz Pehlevan, Dmitri B. Chklovskii
Big data problems frequently require processing datasets in a streaming fashion, either because all data are available at once but collectively are larger than available memory or because the data intrinsically arrive one data point at a time and must be processed online.
no code implementations • 1 Jun 2017 • Cengiz Pehlevan, Sreyas Mohan, Dmitri B. Chklovskii
Blind source separation, i.e., the extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing.
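The problem setting can be sketched with the standard whiten-then-rotate recipe: after whitening, the independent sources differ from the observations only by a rotation, which can be found by maximizing non-Gaussianity. This brute-force toy (sources, mixing matrix, and angle grid are my own) illustrates the setting, not the paper's algorithm:

```python
import numpy as np

# Two independent non-Gaussian sources, linearly mixed.
rng = np.random.default_rng(2)
T = 5000
s1 = np.sign(rng.normal(size=T))          # binary (sub-Gaussian) source
s2 = rng.laplace(size=T)                  # heavy-tailed (super-Gaussian) source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
X = A @ S                                 # observed mixtures (channels as rows)

# Whiten the mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / T
evals, evecs = np.linalg.eigh(cov)
Z = (evecs / np.sqrt(evals)).T @ Xc

# Search rotation angles for the direction of maximal |excess kurtosis|.
best_kurt, best_th = -1.0, 0.0
for th in np.linspace(0, np.pi, 180):
    u = np.array([np.cos(th), np.sin(th)])
    kurt = abs(np.mean((u @ Z) ** 4) - 3)
    if kurt > best_kurt:
        best_kurt, best_th = kurt, th
y = np.array([np.cos(best_th), np.sin(best_th)]) @ Z

# The recovered signal should match one source up to sign and scale.
corr = max(abs(np.corrcoef(y, s1)[0, 1]), abs(np.corrcoef(y, s2)[0, 1]))
```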
no code implementations • 23 Mar 2017 • Cengiz Pehlevan, Anirvan Sengupta, Dmitri B. Chklovskii
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience.
no code implementations • 11 Dec 2016 • Yuansi Chen, Cengiz Pehlevan, Dmitri B. Chklovskii
Here we propose online algorithms where the threshold is self-calibrating based on the singular values computed from the existing observations.
no code implementations • 30 Nov 2015 • Cengiz Pehlevan, Dmitri B. Chklovskii
Here, we focus on such workhorses of signal processing as Principal Component Analysis (PCA) and whitening which maximize information transmission in the presence of noise.
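The two operations named here are easy to state in their offline batch form: PCA projects onto the leading covariance eigenvectors, while whitening rescales along those eigenvectors so the output covariance is the identity. A minimal sketch on toy correlated data (all parameters mine):

```python
import numpy as np

# Correlated Gaussian data via a fixed triangular mixing matrix.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                           [0.0, 1.0, 0.3],
                                           [0.0, 0.0, 0.2]])
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

# PCA: project onto the top-2 principal components.
pcs = evecs[:, ::-1][:, :2]
X_pca = Xc @ pcs

# PCA whitening: rotate, then rescale each axis by 1/sqrt(eigenvalue).
X_white = Xc @ evecs / np.sqrt(evals)
cov_white = X_white.T @ X_white / len(X_white)   # should be the identity
```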
no code implementations • NeurIPS 2015 • Cengiz Pehlevan, Dmitri B. Chklovskii
Here, we derive biologically plausible dimensionality reduction algorithms which adapt the number of output dimensions to the eigenspectrum of the input covariance matrix.
2 code implementations • 2 Mar 2015 • Cengiz Pehlevan, Dmitri B. Chklovskii
Despite our extensive knowledge of biophysical properties of neurons, there is no commonly accepted algorithmic theory of neuronal function.
no code implementations • 2 Mar 2015 • Tao Hu, Cengiz Pehlevan, Dmitri B. Chklovskii
Here, to overcome this problem, we derive sparse dictionary learning from a novel cost function -- a regularized error of the symmetric factorization of the input's similarity matrix.
no code implementations • 2 Mar 2015 • Cengiz Pehlevan, Tao Hu, Dmitri B. Chklovskii
Such networks learn the principal subspace, in the sense of principal component analysis (PCA), by adjusting synaptic weights according to activity-dependent learning rules.
no code implementations • 12 May 2014 • Tao Hu, Zaid J. Towfic, Cengiz Pehlevan, Alex Genkin, Dmitri B. Chklovskii
Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector.
1 code implementation • 25 Mar 2013 • Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, Dmitri B. Chklovskii
We aim to improve segmentation through the use of machine learning tools during region agglomeration.
no code implementations • NeurIPS 2012 • Shaul Druckmann, Tao Hu, Dmitri B. Chklovskii
However, feedback inhibitory circuits are common in early sensory circuits and furthermore their dynamics may be nonlinear.
no code implementations • NeurIPS 2012 • Karol Gregor, Dmitri B. Chklovskii
Early stages of visual processing are thought to decorrelate, or whiten, the incoming temporally varying signals.
no code implementations • NeurIPS 2012 • Dmitri B. Chklovskii, Daniel Soudry
If noise-shaping were used in neurons, it would introduce correlations in spike timing to reduce low-frequency (up to Nyquist) transmission error at the cost of high-frequency error (from Nyquist to the sampling rate).
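The textbook instance of this trade-off is first-order sigma-delta modulation: feeding the quantization error back through an integrator pushes that error toward high frequencies, leaving the low-frequency band accurate. A minimal sketch (a standard engineering reference, not the paper's neuronal model):

```python
import numpy as np

# Slow sinusoidal input, well below Nyquist, quantized to +/-1 spikes.
rng = np.random.default_rng(4)
n = 4096
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * 8 * t / n)

bits = np.zeros(n)
acc = 0.0
for i in range(n):
    acc += x[i] - (bits[i - 1] if i > 0 else 0.0)   # integrate the error
    bits[i] = 1.0 if acc >= 0 else -1.0             # 1-bit quantizer

# The error spectrum should be small at low frequencies, large near Nyquist.
err = bits - x
spec = np.abs(np.fft.rfft(err)) ** 2
low = spec[1:n // 64].mean()      # low-frequency band
high = spec[-n // 64:].mean()     # band just below Nyquist
```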
no code implementations • NeurIPS 2010 • Shaul Druckmann, Dmitri B. Chklovskii
A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy.
no code implementations • NeurIPS 2009 • Tao Hu, Anthony Leonardo, Dmitri B. Chklovskii
One of the central problems in neuroscience is reconstructing synaptic connectivity in neural circuits.