1 code implementation • 25 Feb 2025 • Victor Geadah, Amin Nejatbakhsh, David Lipshutz, Jonathan W. Pillow, Alex H. Williams
Neural population activity exhibits complex, nonlinear dynamics, varying in time, over trials, and across experimental conditions.
no code implementations • 19 Dec 2024 • Amin Nejatbakhsh, Victor Geadah, Alex H. Williams, David Lipshutz
Biological and artificial neural systems form high-dimensional neural representations that underpin their computational capabilities.
no code implementations • 12 Nov 2024 • Sarah E. Harvey, David Lipshutz, Alex H. Williams
Neural responses encode information that is useful for a variety of downstream tasks.
no code implementations • 20 Oct 2024 • Jenelle Feather, David Lipshutz, Sarah E. Harvey, Alex H. Williams, Eero P. Simoncelli
This metric may then be used to optimally differentiate a set of models by finding a pair of "principal distortions" that maximize the variance across the models under this metric.
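As a rough illustration of the idea in this excerpt (my sketch, not the authors' released code): each model contributes a local sensitivity metric, for instance a Fisher-information-like matrix built from its response Jacobian at a reference stimulus, and a unit-norm distortion is sought that maximizes the variance of the models' sensitivities along it. The paper's actual construction of the distortion pair may differ from this naive ascent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: Jacobians of three models' responses at a reference stimulus;
# F_i = J_i^T J_i serves as model i's local sensitivity metric.
dim = 20
metrics = []
for _ in range(3):
    J = rng.standard_normal((50, dim))
    metrics.append(J.T @ J / 50.0)

# Projected gradient ascent on the unit sphere: find the distortion direction e
# that maximizes Var_i(e^T F_i e), i.e. the direction where the models disagree most.
e = rng.standard_normal(dim)
e /= np.linalg.norm(e)
for _ in range(2000):
    s = np.array([e @ F @ e for F in metrics])             # per-model sensitivities
    grad = sum(4.0 / len(metrics) * (si - s.mean()) * (F @ e)
               for si, F in zip(s, metrics))                # gradient of the variance
    e += 0.1 * grad
    e /= np.linalg.norm(e)

s = np.array([e @ F @ e for F in metrics])
print("per-model sensitivities along the principal distortion:", np.round(s, 3))
```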
1 code implementation • 28 May 2024 • David Lipshutz, Eero P. Simoncelli
The circuit, which comprises primary neurons that are recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons and the interneuron activation functions.
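A minimal sketch of that circuit architecture (feedforward drive to primary neurons, recurrent inhibition routed through interneurons with pointwise activation functions); the online adaptation of the synapses and of the activation functions described in the paper is not reproduced here, and the specific weights and nonlinearity below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

n_primary, n_inter = 10, 5
W = 0.3 * rng.standard_normal((n_primary, n_inter))   # primary <-> interneuron synapses (placeholder)
x = rng.standard_normal(n_primary)                    # feedforward input

def g(z):
    """Placeholder interneuron activation function (adapted online in the paper)."""
    return np.tanh(z)

# Euler integration of the recurrent dynamics to a fixed point:
#   tau * dr/dt = -r + x - W g(W^T r)
r = np.zeros(n_primary)
dt, tau = 0.05, 1.0
for _ in range(2000):
    n = g(W.T @ r)                        # interneuron responses
    r += (dt / tau) * (-r + x - W @ n)    # primary neurons: input minus recurrent inhibition

print("steady-state primary responses:", np.round(r, 3))
```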
no code implementations • 6 Jan 2024 • Siavash Golkar, Jules Berman, David Lipshutz, Robert Mihai Haret, Tim Gollisch, Dmitri B. Chklovskii
Such variation in the temporal filter with input SNR resembles that observed experimentally in biological neurons.
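For a generic sense of how an optimal temporal filter varies with input SNR (a textbook scalar Kalman-filter illustration, not the model from the paper): the steady-state gain shrinks as the SNR drops, so the filter integrates over a longer window.

```python
import numpy as np

def steady_state_gain(q, r, n_iter=1000):
    """Steady-state Kalman gain for tracking a random-walk signal (process variance q)
    from observations corrupted by noise of variance r (scalar Riccati recursion)."""
    p = 1.0
    k = 0.0
    for _ in range(n_iter):
        p_pred = p + q
        k = p_pred / (p_pred + r)
        p = (1.0 - k) * p_pred
    return k

# Lower SNR (q / r) -> smaller gain -> longer effective integration window (~ 1/k samples).
for snr in [10.0, 1.0, 0.1, 0.01]:
    k = steady_state_gain(q=snr, r=1.0)
    print(f"SNR={snr:>5}: gain={k:.3f}, integration window ~ {1.0 / k:.1f} samples")
```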
1 code implementation • NeurIPS 2023 • Lyndon R. Duong, Eero P. Simoncelli, Dmitri B. Chklovskii, David Lipshutz
Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses.
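A sketch of the variance-normalization half of this adaptation, using a running variance estimate as a stand-in gain control (a toy illustration, not the paper's circuit); reducing correlations additionally requires lateral interactions, as in the recurrent whitening sketches further down this list.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sensory statistics switch halfway through: the input standard deviation jumps from 1 to 3.
T, n = 4000, 5
x = rng.standard_normal((T, n))
x[T // 2:] *= 3.0

v = np.ones(n)                      # running estimate of each input's variance
eta = 0.01                          # adaptation rate
post_switch_var = []
for t in range(T):
    v = (1 - eta) * v + eta * x[t] ** 2
    y = x[t] / np.sqrt(v)           # gain-normalized response
    if t >= T - 500:
        post_switch_var.append(y ** 2)

# After re-adaptation, response variance is back near 1 despite the 9-fold change in input variance.
print("response variance after re-adaptation:", round(float(np.mean(post_switch_var)), 2))
```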
no code implementations • 20 Feb 2023 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power, and the derived NNs do not explain the multi-compartmental neuronal structures or non-Hebbian forms of plasticity that are prevalent throughout the brain.
1 code implementation • 27 Jan 2023 • Lyndon R. Duong, David Lipshutz, David J. Heeger, Dmitri B. Chklovskii, Eero P. Simoncelli
Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems.
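For reference, a standard batch (ZCA) whitening transform, which several entries in this list revisit under biological constraints; the data here are an arbitrary correlated Gaussian stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)

# Correlated Gaussian data as a stand-in for sensory measurements.
A = rng.standard_normal((5, 5))
X = rng.standard_normal((10000, 5)) @ A.T

Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / len(Xc)                           # sample covariance
evals, evecs = np.linalg.eigh(C)
W_zca = evecs @ np.diag(evals ** -0.5) @ evecs.T  # ZCA whitening matrix, C^(-1/2)

Y = Xc @ W_zca.T
print(np.round(Y.T @ Y / len(Y), 2))              # ~ identity: unit variances, zero correlations
```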
no code implementations • 14 Nov 2022 • Siavash Golkar, David Lipshutz, Tiberiu Tesileanu, Dmitri B. Chklovskii
However, the performance of cPCA is sensitive to the choice of hyperparameter, and there is currently no online algorithm for implementing cPCA.
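To make the hyperparameter sensitivity concrete, here is a minimal (offline) contrastive PCA sketch: the top contrastive direction is the leading eigenvector of C_foreground - alpha * C_background, and which direction wins depends strongly on alpha. The data and alpha values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

d = 10
spectrum = np.linspace(3.0, 1.0, d)                # shared "background" variance profile
background = rng.standard_normal((2000, d)) * spectrum
signal_dir = np.eye(d)[-1]                         # low-variance direction of interest
foreground = (rng.standard_normal((2000, d)) * spectrum
              + 1.5 * rng.standard_normal((2000, 1)) * signal_dir)

def cpca_top_direction(fg, bg, alpha):
    """Leading eigenvector of C_fg - alpha * C_bg (contrastive PCA, batch version)."""
    diff = np.cov(fg, rowvar=False) - alpha * np.cov(bg, rowvar=False)
    evals, evecs = np.linalg.eigh(diff)
    return evecs[:, -1]

# The recovered direction depends strongly on the contrast parameter alpha.
for alpha in [0.0, 0.5, 1.0, 2.0]:
    v = cpca_top_direction(foreground, background, alpha)
    print(f"alpha={alpha}: overlap with the direction of interest = {abs(v @ signal_dir):.2f}")
```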
no code implementations • 21 Sep 2022 • David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate recurrent communication.
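A sketch of the fixed points of those two architectures (in the paper the synaptic weights are learned online via local rules; here the weights that whitening would require are simply plugged in, to check that both circuits decorrelate and normalize their outputs):

```python
import numpy as np

rng = np.random.default_rng(5)

# Input covariance with eigenvalues >= 1, so that C^(1/2) - I is positive semidefinite.
d = 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
C = Q @ np.diag([1.5, 2.0, 3.0, 5.0]) @ Q.T
evals, evecs = np.linalg.eigh(C)
C_sqrt = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
X = rng.standard_normal((20000, d)) @ C_sqrt      # samples with covariance ~ C

# Circuit 1: direct recurrent connections. Steady state r = x - W r  =>  r = (I + W)^(-1) x.
W = C_sqrt - np.eye(d)
R_direct = np.linalg.solve(np.eye(d) + W, X.T).T

# Circuit 2: interneuron-mediated recurrence. n = V^T r, r = x - V n  =>  r = (I + V V^T)^(-1) x.
lam, U = np.linalg.eigh(C_sqrt - np.eye(d))
V = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))
R_inter = np.linalg.solve(np.eye(d) + V @ V.T, X.T).T

print("direct-circuit output covariance:\n", np.round(np.cov(R_direct, rowvar=False), 2))
print("interneuron-circuit output covariance:\n", np.round(np.cov(R_inter, rowvar=False), 2))
```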
no code implementations • 30 Nov 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.
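The weight sharing issue is that backpropagation's error signal to a hidden layer reuses the transpose of the feedforward weights, which a biological circuit has no obvious way to access. A toy two-layer illustration, contrasted with feedback alignment (a well-known alternative that uses fixed random feedback weights; named here only as an illustration, not as this paper's proposal):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-layer linear network: h = W1 x, y = W2 h.
n_in, n_hid, n_out = 8, 6, 3
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.1 * rng.standard_normal((n_hid, n_out))   # fixed random feedback weights

x = rng.standard_normal(n_in)
target = rng.standard_normal(n_out)

h = W1 @ x
err = W2 @ h - target

# Backprop: the hidden-layer error reuses W2's transpose, so the feedback pathway must
# exactly mirror ("share") the feedforward weights.
delta_backprop = W2.T @ err

# Feedback alignment: a fixed random matrix carries the error instead, removing that requirement.
delta_fa = B @ err

print("hidden error via W2^T     :", np.round(delta_backprop, 3))
print("hidden error via random B :", np.round(delta_fa, 3))
```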
no code implementations • NeurIPS 2020 • Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
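One classical way to make "supervisory inputs guiding the projection of the feedforward data" concrete is canonical correlation analysis between the feedforward data and the supervisory signal; a scikit-learn sketch with synthetic data (a point of reference, not the biologically plausible circuit derived in the paper):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)

# Feedforward data X and supervisory signal Y sharing a 2-D latent (synthetic).
n, dx, dy, k = 2000, 10, 4, 2
Z = rng.standard_normal((n, k))
X = Z @ rng.standard_normal((k, dx)) + 0.5 * rng.standard_normal((n, dx))
Y = Z @ rng.standard_normal((k, dy)) + 0.5 * rng.standard_normal((n, dy))

# Project the feedforward data onto the k directions most predictive of the supervision.
cca = CCA(n_components=k)
X_proj, Y_proj = cca.fit_transform(X, Y)

for i in range(k):
    c = np.corrcoef(X_proj[:, i], Y_proj[:, i])[0, 1]
    print(f"component {i}: canonical correlation ~ {c:.2f}")
```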
1 code implementation • 23 Oct 2020 • David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
To model how the brain performs this task, we seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.
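To illustrate the blind source separation setting itself, here is a standard FastICA baseline for reference (the paper derives a biologically plausible single-layer network rather than using this algorithm):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)

# Two independent, non-Gaussian sources and their linear mixtures (the BSS setup).
t = np.linspace(0.0, 8.0, 2000)
S = np.column_stack([np.sign(np.sin(3.0 * t)),       # square wave
                     rng.laplace(size=t.shape)])      # heavy-tailed source
A = np.array([[1.0, 0.5],
              [0.7, 1.0]])                            # unknown mixing matrix
X = S @ A.T                                           # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                          # recovered sources (up to permutation/scale)

corr = np.corrcoef(np.column_stack([S, S_hat]).T)[:2, 2:]
print("|correlation|, true vs. recovered sources:\n", np.round(np.abs(corr), 2))
```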
1 code implementation • NeurIPS 2020 • David Lipshutz, Charlie Windolf, Siavash Golkar, Dmitri B. Chklovskii
Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features.
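The underlying algorithm, linear slow feature analysis, is short enough to sketch: whiten the signals, then take the whitened direction whose temporal derivative has the smallest variance (toy data below; the paper's contribution is a biologically plausible online implementation, not this batch version).

```python
import numpy as np

rng = np.random.default_rng(9)

# A slowly varying latent hidden among fast-varying signals, then linearly mixed.
T = 5000
t = np.arange(T)
slow = np.sin(2 * np.pi * t / 1000.0)
signals = np.column_stack([rng.standard_normal((T, 3)),
                           slow + 0.1 * rng.standard_normal(T)])
X = signals @ rng.standard_normal((4, 4))

# Linear SFA: whiten, then find the whitened direction with the slowest variation.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ evecs @ np.diag(evals ** -0.5)       # whitened signals
d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
slow_feature = Z @ d_evecs[:, 0]              # smallest derivative variance = slowest feature

print("|corr(slow feature, true slow latent)| =",
      round(abs(np.corrcoef(slow_feature, slow)[0, 1]), 2))
```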
1 code implementation • 1 Oct 2020 • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
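A textbook example of what "online with local synaptic updates" means in practice is Oja's rule for extracting the top principal component: each weight change uses only the presynaptic input, the postsynaptic output, and the weight itself (shown below as an illustration of the constraint, not as the specific algorithm derived in the paper).

```python
import numpy as np

rng = np.random.default_rng(10)

# Streaming inputs with one dominant principal direction.
d = 5
true_dir = np.array([1.0, 0.5, 0.0, -0.5, 0.2])
true_dir /= np.linalg.norm(true_dir)

w = 0.1 * rng.standard_normal(d)    # synaptic weight vector
eta = 0.01
for _ in range(20000):
    x = 3.0 * rng.standard_normal() * true_dir + 0.3 * rng.standard_normal(d)
    y = w @ x                       # postsynaptic output
    # Oja's rule: Hebbian term y*x plus a local normalization term -y^2*w.
    w += eta * y * (x - y * w)

print("alignment with the top principal direction:", round(abs(w @ true_dir), 3))
```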