Search Results for author: Dmitri B. Chklovskii

Found 37 papers, 10 papers with code

Neuronal Temporal Filters as Normal Mode Extractors

no code implementations 6 Jan 2024 Siavash Golkar, Jules Berman, David Lipshutz, Robert Mihai Haret, Tim Gollisch, Dmitri B. Chklovskii

Such variation in the temporal filter with input SNR resembles that observed experimentally in biological neurons.

Time Series

The Neuron as a Direct Data-Driven Controller

no code implementations 3 Jan 2024 Jason Moore, Alexander Genkin, Magnus Tournoy, Joshua Pughe-Sanford, Rob R. de Ruyter van Steveninck, Dmitri B. Chklovskii

In the quest to model neuronal function amidst gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective.

Adaptive whitening with fast gain modulation and slow synaptic plasticity

1 code implementation NeurIPS 2023 Lyndon R. Duong, Eero P. Simoncelli, Dmitri B. Chklovskii, David Lipshutz

Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses.

Unlocking the Potential of Similarity Matching: Scalability, Supervision and Pre-training

no code implementations 2 Aug 2023 Yanis Bahroun, Shagesh Sridharan, Atithi Acharya, Dmitri B. Chklovskii, Anirvan M. Sengupta

This study focuses on the primarily unsupervised similarity matching (SM) framework, which aligns with observed mechanisms in biological systems and offers online, localized, and biologically plausible algorithms.

Computational Efficiency

Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning

no code implementations 2 Aug 2023 Yanis Bahroun, Dmitri B. Chklovskii, Anirvan M. Sengupta

In this work, we focus not on developing new algorithms but on showing that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.

Normative framework for deriving neural networks with multi-compartmental neurons and non-Hebbian plasticity

no code implementations 20 Feb 2023 David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain.

Self-Supervised Learning

Adaptive whitening in neural populations with gain-modulating interneurons

1 code implementation 27 Jan 2023 Lyndon R. Duong, David Lipshutz, David J. Heeger, Dmitri B. Chklovskii, Eero P. Simoncelli

Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems.
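Since several of the papers in this list build on whitening, here is a minimal batch sketch of what a statistical whitening transformation computes. This is plain ZCA whitening, not the paper's adaptive gain-modulating circuit; the epsilon regularizer and the toy data are illustrative assumptions.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Whiten a samples-by-features matrix so the output has identity covariance.

    Batch ZCA sketch; the paper's contribution is an *adaptive* circuit
    with gain-modulating interneurons, which this does not implement.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    C = Xc.T @ Xc / len(Xc)                 # sample covariance
    evals, evecs = np.linalg.eigh(C)        # eigendecomposition of covariance
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W                           # whitened samples

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))  # correlated inputs
Y = zca_whiten(X)
print(np.round(Y.T @ Y / len(Y), 2))        # approximately the identity matrix
```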

An online algorithm for contrastive Principal Component Analysis

no code implementations 14 Nov 2022 Siavash Golkar, David Lipshutz, Tiberiu Tesileanu, Dmitri B. Chklovskii

However, the performance of cPCA is sensitive to hyper-parameter choice and there is currently no online algorithm for implementing cPCA.

Contrastive Learning
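For reference, a minimal offline cPCA sketch: the top eigenvectors of the difference between target and background covariances, with the contrast strength alpha being the hyper-parameter the snippet refers to. The paper's contribution is an online algorithm, which this batch version does not implement.

```python
import numpy as np

def contrastive_pca(X_target, X_background, alpha, k=2):
    """Batch cPCA: top-k eigenvectors of C_target - alpha * C_background.

    Offline sketch only; sensitivity to `alpha` is the hyper-parameter
    issue mentioned in the abstract snippet above.
    """
    def cov(X):
        Xc = X - X.mean(axis=0)
        return Xc.T @ Xc / len(Xc)

    C = cov(X_target) - alpha * cov(X_background)
    evals, evecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    return evecs[:, ::-1][:, :k]            # top-k contrastive directions
```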

Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy

1 code implementation 27 Oct 2022 Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex.

Interneurons accelerate learning dynamics in recurrent neural networks for statistical adaptation

no code implementations 21 Sep 2022 David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii

To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate recurrent communication.

Bridging the Gap: Point Clouds for Merging Neurons in Connectomics

no code implementations 3 Dec 2021 Jules Berman, Dmitri B. Chklovskii, Jingpeng Wu

To address this problem, we propose a novel method based on point cloud representations of neurons.

Point Cloud Classification

Neural optimal feedback control with local learning rules

2 code implementations NeurIPS 2021 Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan M. Sengupta, Dmitri B. Chklovskii

This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances.
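As a point of reference for the filtering the network performs, below is one standard Kalman predict/update step with known dynamics matrix A, observation matrix C, and noise covariances Q and R. The paper's network learns its filter without being given Q and R; this sketch only states the classical computation.

```python
import numpy as np

def kalman_step(x_hat, P, y, A, C, Q, R):
    """One predict/update cycle of a textbook Kalman filter.

    Reference implementation with *known* model matrices; the paper's
    network performs this computation without knowing Q and R.
    """
    # Predict the next state and its error covariance
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update with the new observation y
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(P)) - K @ C) @ P_pred
    return x_new, P_new
```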

Neural circuits for dynamics-based segmentation of time series

1 code implementation 24 Apr 2021 Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii

In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known.

Segmentation, Time Series +1
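A rough sketch of dynamics-based segmentation in the oracle setting the snippet mentions: each window of the signal is labeled by whichever known autoregressive model predicts it best. The window size and labeling scheme are illustrative assumptions, not the paper's neural circuit.

```python
import numpy as np

def ar_prediction_error(x, coeffs):
    """Mean squared one-step prediction error of an AR model on signal x."""
    p = len(coeffs)
    preds = np.array([coeffs @ x[t - p:t][::-1] for t in range(p, len(x))])
    return np.mean((x[p:] - preds) ** 2)

def segment(x, models, win=50):
    """Label each window with the index of the best-predicting AR model.

    Oracle-style sketch (AR coefficients are given), mirroring the
    ground-truth baseline mentioned in the snippet above.
    """
    labels = []
    for start in range(0, len(x) - win + 1, win):
        window = x[start:start + win]
        errors = [ar_prediction_error(window, c) for c in models]
        labels.append(int(np.argmin(errors)))
    return labels
```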

A Neural Network with Local Learning Rules for Minor Subspace Analysis

no code implementations 10 Feb 2021 Yanis Bahroun, Dmitri B. Chklovskii

However, no biologically plausible networks exist for minor subspace analysis (MSA), a fundamental signal processing task.

Clustering
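For orientation, the batch quantity MSA targets is the subspace spanned by the eigenvectors of the smallest eigenvalues of the input covariance. The paper derives an online, biologically plausible network for this; the sketch below is the plain offline computation.

```python
import numpy as np

def minor_subspace(X, k=2):
    """Batch minor subspace: eigenvectors of the k smallest eigenvalues.

    Offline reference for what MSA computes; the paper's contribution is
    an online network with local learning rules, not shown here.
    """
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)                 # input covariance
    evals, evecs = np.linalg.eigh(C)        # ascending eigenvalues
    return evecs[:, :k]                     # minor subspace basis
```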

A biologically plausible neural network for local supervision in cortical microcircuits

no code implementations 30 Nov 2020 Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.

Biologically plausible single-layer networks for nonnegative independent component analysis

1 code implementation 23 Oct 2020 David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii

To model how the brain performs this task, we seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.

blind source separation

A biologically plausible neural network for Slow Feature Analysis

1 code implementation NeurIPS 2020 David Lipshutz, Charlie Windolf, Siavash Golkar, Dmitri B. Chklovskii

Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features.

Hippocampus, Time Series +1
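For reference, the standard batch formulation of linear SFA as a generalized eigenproblem: find projections that minimize the variance of the temporal derivative relative to the variance of the signal. The paper's subject is a biologically plausible online network; this offline version only states the objective being solved.

```python
import numpy as np
from scipy.linalg import eigh

def slow_feature_analysis(X, k=2):
    """Batch linear SFA: the k most slowly varying linear features of X.

    Standard offline formulation, not the paper's online network.
    X is a time-by-features matrix.
    """
    Xc = X - X.mean(axis=0)
    dX = np.diff(Xc, axis=0)                # discrete temporal derivative
    C = Xc.T @ Xc / len(Xc)                 # signal covariance
    Cdot = dX.T @ dX / len(dX)              # derivative covariance
    # Smallest generalized eigenvalues of (Cdot, C) give the slowest features.
    evals, evecs = eigh(Cdot, C)
    return evecs[:, :k]
```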

A simple normative network approximates local non-Hebbian learning in the cortex

no code implementations NeurIPS 2020 Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.

A biologically plausible neural network for multi-channel Canonical Correlation Analysis

1 code implementation 1 Oct 2020 David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.

Neuroscience-inspired online unsupervised learning algorithms

no code implementations 5 Aug 2019 Cengiz Pehlevan, Dmitri B. Chklovskii

Although the currently popular deep learning networks achieve unprecedented performance on some tasks, the human brain still has a monopoly on general intelligence.

Clustering, Dimensionality Reduction

Efficient Principal Subspace Projection of Streaming Data Through Fast Similarity Matching

no code implementations 6 Aug 2018 Andrea Giovannucci, Victor Minden, Cengiz Pehlevan, Dmitri B. Chklovskii

Big data problems frequently require processing datasets in a streaming fashion, either because all data are available at once but collectively are larger than available memory or because the data intrinsically arrive one data point at a time and must be processed online.

Dimensionality Reduction
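A generic streaming-subspace sketch in the spirit of this setting: Oja's subspace rule updates a projection matrix one sample at a time, in memory proportional to the projection itself. This is not the paper's fast similarity matching algorithm; the learning rate and initialization are illustrative assumptions.

```python
import numpy as np

def oja_subspace(X, k=3, lr=1e-3):
    """Streaming principal subspace estimate via Oja's subspace rule.

    Processes one sample at a time in O(d*k) memory; a generic sketch,
    not the paper's fast similarity matching method.
    """
    d = X.shape[1]
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.normal(size=(d, k)))[0]        # orthonormal init
    for x in X:                                         # one pass over the stream
        y = W.T @ x                                     # project the sample
        W += lr * (np.outer(x, y) - W @ np.outer(y, y)) # Oja subspace update
    return W
```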

Blind nonnegative source separation using biological neural networks

no code implementations 1 Jun 2017 Cengiz Pehlevan, Sreyas Mohan, Dmitri B. Chklovskii

Blind source separation, i.e., extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing.

blind source separation

Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks?

no code implementations 23 Mar 2017 Cengiz Pehlevan, Anirvan Sengupta, Dmitri B. Chklovskii

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience.

Dimensionality Reduction
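For reference, the similarity matching objective these papers analyze, with inputs and outputs stored columnwise (up to constant factors; dimensions follow the usual convention in this line of work):

```latex
\min_{Y \in \mathbb{R}^{k \times T}} \frac{1}{T^2}
\left\lVert X^\top X - Y^\top Y \right\rVert_F^2 ,
\qquad X \in \mathbb{R}^{n \times T}
```

Matching output similarities to input similarities, rather than reconstructing inputs, is what yields networks with Hebbian feedforward and anti-Hebbian lateral updates.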

Self-calibrating Neural Networks for Dimensionality Reduction

no code implementations 11 Dec 2016 Yuansi Chen, Cengiz Pehlevan, Dmitri B. Chklovskii

Here we propose online algorithms where the threshold is self-calibrating based on the singular values computed from the existing observations.

Dimensionality Reduction
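A toy version of the idea in the snippet: retain the output dimensions whose singular values exceed a threshold estimated from the observed data. The median-based noise estimate and scale factor below are loud assumptions for illustration, not the paper's self-calibration rule.

```python
import numpy as np

def adaptive_rank(X, factor=2.0):
    """Count singular values above a data-driven threshold.

    The median-based noise-level estimate and `factor` are illustrative
    assumptions; the paper derives its own self-calibrating threshold.
    """
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    tau = factor * np.median(s)             # crude noise-level estimate
    return int(np.sum(s > tau))             # number of retained dimensions
```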

Optimization theory of Hebbian/anti-Hebbian networks for PCA and whitening

no code implementations 30 Nov 2015 Cengiz Pehlevan, Dmitri B. Chklovskii

Here, we focus on such workhorses of signal processing as Principal Component Analysis (PCA) and whitening which maximize information transmission in the presence of noise.

A Normative Theory of Adaptive Dimensionality Reduction in Neural Networks

no code implementations NeurIPS 2015 Cengiz Pehlevan, Dmitri B. Chklovskii

Here, we derive biologically plausible dimensionality reduction algorithms which adapt the number of output dimensions to the eigenspectrum of the input covariance matrix.

Dimensionality Reduction

A Hebbian/Anti-Hebbian Network Derived from Online Non-Negative Matrix Factorization Can Cluster and Discover Sparse Features

2 code implementations 2 Mar 2015 Cengiz Pehlevan, Dmitri B. Chklovskii

Despite our extensive knowledge of biophysical properties of neurons, there is no commonly accepted algorithmic theory of neuronal function.

Anatomy, Clustering

A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization

no code implementations 2 Mar 2015 Tao Hu, Cengiz Pehlevan, Dmitri B. Chklovskii

Here, to overcome this problem, we derive sparse dictionary learning from a novel cost function: a regularized error of the symmetric factorization of the input's similarity matrix.

Dictionary Learning

A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data

no code implementations 2 Mar 2015 Cengiz Pehlevan, Tao Hu, Dmitri B. Chklovskii

Such networks learn the principal subspace, in the sense of principal component analysis (PCA), by adjusting synaptic weights according to activity-dependent learning rules.

A Neuron as a Signal Processing Device

no code implementations 12 May 2014 Tao Hu, Zaid J. Towfic, Cengiz Pehlevan, Alex Genkin, Dmitri B. Chklovskii

Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector.
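The snippet's view of a neuron can be written as a rank-1 sparse factorization, X ≈ w aᵀ with sparse synaptic weights w and sparse activity a. A minimal alternating sketch with soft-thresholding follows; the l1 penalties and update order are illustrative assumptions, not the paper's online neuronal algorithm.

```python
import numpy as np

def soft(v, lam):
    """Soft-thresholding operator (proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_rank1(X, lam=0.1, iters=100):
    """Approximate X (channels x time) as outer(w, a) with sparse w and a.

    Alternating least squares plus soft-thresholding; an illustrative
    sketch of the 'sparse weights x sparse activity' view, not the
    paper's method.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[0])
    a = np.zeros(X.shape[1])
    for _ in range(iters):
        a = soft(X.T @ w / (w @ w + 1e-12), lam)   # update activity vector
        if not a.any():
            break                                  # everything thresholded away
        w = soft(X @ a / (a @ a + 1e-12), lam)     # update synaptic weights
    return w, a
```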

A mechanistic model of early sensory processing based on subtracting sparse representations

no code implementations NeurIPS 2012 Shaul Druckmann, Tao Hu, Dmitri B. Chklovskii

However, feedback inhibitory circuits are common in early sensory circuits and furthermore their dynamics may be nonlinear.

A lattice filter model of the visual pathway

no code implementations NeurIPS 2012 Karol Gregor, Dmitri B. Chklovskii

Early stages of visual processing are thought to decorrelate, or whiten, the incoming temporally varying signals.

Neuronal Spike Generation Mechanism as an Oversampling, Noise-shaping A-to-D converter

no code implementations NeurIPS 2012 Dmitri B. Chklovskii, Daniel Soudry

If noise-shaping were used in neurons, it would introduce correlations in spike timing to reduce low-frequency (up to Nyquist) transmission error at the cost of high-frequency error (from Nyquist to the sampling rate).
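For intuition about the analogy, here is a textbook first-order sigma-delta (oversampling, noise-shaping) converter: feeding the quantization decision back through an integrator pushes quantization error toward high frequencies. The toy signal is an assumption for illustration.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulation of an input in [-1, 1].

    Textbook noise-shaping A-to-D sketch: the integrator accumulates the
    difference between the input and the previous 1-bit output, so
    quantization error is shaped toward high frequencies.
    """
    integrator, out = 0.0, []
    for sample in x:
        integrator += sample - (out[-1] if out else 0.0)
        out.append(1.0 if integrator >= 0 else -1.0)   # 1-bit "spike"
    return np.array(out)

t = np.linspace(0, 1, 1000)
bits = sigma_delta(0.5 * np.sin(2 * np.pi * 5 * t))    # oversampled sine
print(bits[:20])
```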

Over-complete representations on recurrent neural networks can support persistent percepts

no code implementations NeurIPS 2010 Shaul Druckmann, Dmitri B. Chklovskii

A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy.
