Search Results for author: Siavash Golkar

Found 19 papers, 11 papers with code

Neuronal Temporal Filters as Normal Mode Extractors

no code implementations 6 Jan 2024 Siavash Golkar, Jules Berman, David Lipshutz, Robert Mihai Haret, Tim Gollisch, Dmitri B. Chklovskii

Such variation in the temporal filter with input SNR resembles that observed experimentally in biological neurons.

Time Series

Reusability report: Prostate cancer stratification with diverse biologically-informed neural architectures

1 code implementation 28 Sep 2023 Christian Pedersen, Tiberiu Tesileanu, Tinghui Wu, Siavash Golkar, Miles Cranmer, Zijun Zhang, Shirley Ho

This suggests that different neural architectures are sensitive to different aspects of the data, an important yet under-explored challenge for clinical prediction tasks.

Normative framework for deriving neural networks with multi-compartmental neurons and non-Hebbian plasticity

no code implementations 20 Feb 2023 David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain.

Self-Supervised Learning

An online algorithm for contrastive Principal Component Analysis

no code implementations 14 Nov 2022 Siavash Golkar, David Lipshutz, Tiberiu Tesileanu, Dmitri B. Chklovskii

However, the performance of cPCA is sensitive to hyper-parameter choice and there is currently no online algorithm for implementing cPCA.

Contrastive Learning
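For context on what the paper makes online: standard offline contrastive PCA finds directions that maximize variance in a target dataset while suppressing variance in a background dataset, via an eigendecomposition of the difference of covariances. The sketch below is a minimal offline illustration of that standard formulation only; the function name and defaults are our assumptions, and it is not the paper's online algorithm.

```python
import numpy as np

def contrastive_pca(X_target, X_background, alpha=1.0, n_components=2):
    """Offline cPCA sketch: top eigenvectors of C_target - alpha * C_background.

    alpha trades off maximizing target variance against
    suppressing background variance (alpha=0 recovers plain PCA).
    """
    C_target = np.cov(X_target, rowvar=False)
    C_background = np.cov(X_background, rowvar=False)
    # Symmetric matrix, so eigh gives real eigenvalues and orthonormal vectors.
    w, V = np.linalg.eigh(C_target - alpha * C_background)
    order = np.argsort(w)[::-1]          # sort eigenvalues descending
    return V[:, order[:n_components]]    # (n_features, n_components)
```

The hyper-parameter sensitivity mentioned in the excerpt enters through `alpha`, which must be chosen per dataset.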

Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy

1 code implementation 27 Oct 2022 Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex.

Neural optimal feedback control with local learning rules

2 code implementations NeurIPS 2021 Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan M. Sengupta, Dmitri B. Chklovskii

This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances.
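For reference, the textbook Kalman filter that this excerpt contrasts with does require the noise covariances as inputs. A minimal scalar predict/update step, with our own variable names and default values, is sketched below; it is a baseline illustration, not the paper's neural circuit.

```python
def kalman_step(x_est, P, z, A=1.0, H=1.0, Q=1e-3, R=1e-1):
    """One scalar Kalman filter iteration.

    x_est, P : prior state estimate and its variance
    z        : new measurement
    A, H     : state-transition and observation coefficients
    Q, R     : process and measurement noise variances
               (the quantities the paper's network avoids needing)
    """
    # Predict step: propagate the estimate and its uncertainty.
    x_pred = A * x_est
    P_pred = A * P * A + Q
    # Update step: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

Iterating this step on noisy measurements of a constant signal drives the estimate toward the true value.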

Neural circuits for dynamics-based segmentation of time series

1 code implementation 24 Apr 2021 Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii

In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known.

Segmentation, Time Series +1

A biologically plausible neural network for local supervision in cortical microcircuits

no code implementations 30 Nov 2020 Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight sharing requirement, it does not provide a plausible model of brain function.

A simple normative network approximates local non-Hebbian learning in the cortex

no code implementations NeurIPS 2020 Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii

Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.

A biologically plausible neural network for Slow Feature Analysis

1 code implementation NeurIPS 2020 David Lipshutz, Charlie Windolf, Siavash Golkar, Dmitri B. Chklovskii

Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features.

Hippocampus, Time Series +1

A biologically plausible neural network for multi-channel Canonical Correlation Analysis

1 code implementation 1 Oct 2020 David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, Dmitri B. Chklovskii

For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.

Emergent Structures and Lifetime Structure Evolution in Artificial Neural Networks

no code implementations NeurIPS Workshop Neuro_AI 2019 Siavash Golkar

These different structures can be derived using gradient descent on a single general loss function where the structure of the data and the relative strengths of various regulator terms determine the structure of the emergent network.

Task-Driven Data Verification via Gradient Descent

no code implementations 14 May 2019 Siavash Golkar, Kyunghyun Cho

We introduce a novel algorithm for the detection of possible sample corruption such as mislabeled samples in a training dataset given a small clean validation set.

Inferring the quantum density matrix with machine learning

no code implementations 11 Apr 2019 Kyle Cranmer, Siavash Golkar, Duccio Pappadopulo

We also introduce quantum flows, the quantum analog of normalizing flows, which can be used to increase the expressivity of this variational family.

BIG-bench Machine Learning, Variational Inference

Continual Learning via Neural Pruning

1 code implementation 11 Mar 2019 Siavash Golkar, Michael Kagan, Kyunghyun Cho

We introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed capacity models based on neuronal model sparsification.

Continual Learning

Backdrop: Stochastic Backpropagation

1 code implementation ICLR 2019 Siavash Golkar, Kyle Cranmer

We introduce backdrop, a flexible and simple-to-implement method, intuitively described as dropout acting only along the backpropagation pipeline.
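As described, backdrop leaves the forward pass untouched and applies a dropout-style mask only along the backward pass. A minimal sketch of that gradient-masking idea is below; the function name and the inverse-probability rescaling are our assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def backdrop_grad(grad, keep_prob=0.5):
    """Dropout applied only to a backward-flowing gradient.

    Forward activations are never modified; the gradient is zeroed
    at random positions and rescaled so its expectation is preserved.
    """
    mask = rng.random(grad.shape) < keep_prob
    return grad * mask / keep_prob
```

With `keep_prob=1.0` the gradient passes through unchanged, recovering ordinary backpropagation.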
