Search Results for author: Cristina Savin

Found 20 papers, 6 papers with code

Desiderata for normative models of synaptic plasticity

no code implementations · 9 Aug 2023 Colin Bredenberg, Cristina Savin

In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions.

A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation

1 code implementation · 6 Sep 2022 Edoardo Balzani, Jean Paul Noel, Pedro Herrero-Vidal, Dora E. Angelaki, Cristina Savin

Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas.

A sampling-based circuit for optimal decision making

no code implementations NeurIPS 2021 Camille Rullán Buxó, Cristina Savin

Many features of human and animal behavior can be understood in the framework of Bayesian inference and optimal decision making, but the biological substrate of such processes is not fully understood.

Bayesian Inference · Decision Making +1

Impression learning: Online representation learning with synaptic plasticity

1 code implementation NeurIPS 2021 Colin Bredenberg, Benjamin Lyo, Eero Simoncelli, Cristina Savin

Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience.

Bayesian Inference · Representation Learning

Across-animal odor decoding by probabilistic manifold alignment

1 code implementation NeurIPS 2021 Pedro Herrero-Vidal, Dmitry Rinberg, Cristina Savin

Identifying the common structure of neural dynamics across subjects is key for extracting unifying principles of brain computation and for many brain machine interface applications.
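The core geometric idea behind aligning neural dynamics across subjects can be illustrated with a deterministic orthogonal-Procrustes fit; note the paper's method is probabilistic, so this is only a simplified sketch of the alignment step, with all names and parameters illustrative:

```python
import numpy as np

def procrustes_align(X, Y):
    """Rotate latent trajectory Y onto reference trajectory X.

    X, Y: (T, d) arrays of latent states over T time points.
    Solves min_R ||X - Y R^T|| over orthogonal R (Kabsch/Procrustes).
    """
    M = X.T @ Y                  # cross-covariance between the two manifolds
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt                   # optimal orthogonal map
    return Y @ R.T               # Y expressed in X's coordinate frame
```

After such an alignment, a decoder trained on one animal's latent space can in principle be applied to another's, which is the motivation for cross-subject manifold alignment.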


Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece

no code implementations · 12 May 2021 Luke Y. Prince, Roy Henha Eyono, Ellen Boven, Arna Ghosh, Joe Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass, Blake A. Richards, Cristina Savin, Katharina Anna Wilmes

We briefly review common assumptions about biological learning in light of findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in recurrent neural networks.

Online hyperparameter optimization by real-time recurrent learning

1 code implementation · 15 Feb 2021 Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho

Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning.
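The flavor of dynamically adapting a hyperparameter online, rather than via repeated full training runs, can be conveyed with a generic hypergradient sketch (this is not the paper's algorithm; the function name and `beta` meta-step-size are made up for illustration):

```python
import numpy as np

def online_hypergrad_lr(grads, eta0=0.01, beta=1e-4):
    """Adapt a learning rate online from a stream of gradients.

    The hypergradient of the loss w.r.t. the learning rate is
    approximately -g_t . g_{t-1}, so eta is nudged upward whenever
    consecutive gradients point the same way, downward when they
    oppose. Returns the trajectory of learning rates.
    """
    eta = eta0
    etas = []
    prev = np.zeros_like(grads[0])
    for g in grads:
        eta = eta + beta * float(np.dot(g, prev))  # hypergradient step
        prev = g
        etas.append(eta)
    return etas
```

The appeal in a life-long-learning setting is that the hyperparameter tracks the data as it drifts, with no outer optimization loop.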

Hyperparameter Optimization

Efficient estimation of neural tuning during naturalistic behavior

1 code implementation NeurIPS 2020 Edoardo Balzani, Kaushik Lakshminarasimhan, Dora Angelaki, Cristina Savin

Recent technological advances in systems neuroscience have led to a shift away from using simple tasks, with low-dimensional, well-controlled stimuli, towards trying to understand neural activity during naturalistic behavior.

Learning efficient task-dependent representations with synaptic plasticity

1 code implementation NeurIPS 2020 Colin Bredenberg, Eero Simoncelli, Cristina Savin

Neural populations encode the sensory world imperfectly: their capacity is limited by the number of neurons, availability of metabolic and other biophysical resources, and intrinsic noise.

Flexible information routing in neural populations through stochastic comodulation

no code implementations NeurIPS 2019 Caroline Haimerl, Cristina Savin, Eero Simoncelli

It has been observed that trial-to-trial neural activity is modulated by a shared, low-dimensional, stochastic signal that introduces task-irrelevant noise.
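A shared low-dimensional stochastic modulator of this kind is easy to simulate: scaling each trial's firing rates by a common random gain induces the correlated, task-irrelevant variability the abstract describes. A toy illustration (parameter names are illustrative, not from the paper):

```python
import numpy as np

def comodulated_spikes(tuning, n_trials, sigma=0.3, seed=0):
    """Simulate trial-to-trial comodulation of a neural population.

    tuning: (n_neurons,) mean firing rates per trial.
    Each trial's rates are the tuning curve scaled by one shared
    log-normal gain, then spike counts are drawn from a Poisson.
    Returns (n_trials, n_neurons) spike counts.
    """
    rng = np.random.default_rng(seed)
    gains = np.exp(sigma * rng.standard_normal(n_trials))  # shared per-trial gain
    rates = gains[:, None] * tuning[None, :]
    return rng.poisson(rates)
```

Because every neuron shares the same gain on a given trial, the simulated counts show positive noise correlations even though the tuning itself is fixed.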

Decision Making · Decoder

A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks

no code implementations · 5 Jul 2019 Owen Marschall, Kyunghyun Cho, Cristina Savin

We present a framework for compactly summarizing many recent results in efficient and/or biologically plausible online training of recurrent neural networks (RNNs).
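The canonical algorithm these online methods build on is real-time recurrent learning (RTRL), which propagates an influence matrix dh/dW forward in time alongside the network state. A minimal textbook-style sketch for a vanilla RNN, tracking only the recurrent weights (not any specific paper's variant):

```python
import numpy as np

def rtrl_step(h, x, W, U, P, lr, target):
    """One RTRL step for h' = tanh(W h + U x).

    h: (n,) hidden state, x: input, W: (n, n) recurrent weights,
    U: input weights, P: (n, n*n) influence matrix dh/dvec(W),
    target: desired hidden state for an instantaneous squared loss.
    Returns the new state, new influence matrix, and updated W.
    """
    n = h.size
    h_new = np.tanh(W @ h + U @ x)
    D = np.diag(1.0 - h_new**2)         # tanh' at the new activations
    imm = np.kron(np.eye(n), h)         # immediate effect: da_i/dW_ij = h_j
    P_new = D @ (W @ P + imm)           # recurrent influence update
    err = h_new - target                # dL/dh for L = 0.5||h_new - target||^2
    gradW = (err @ P_new).reshape(n, n) # chain rule through the influence matrix
    return h_new, P_new, W - lr * gradW
```

The O(n^3)-per-step cost of carrying P is exactly what the approximations surveyed in such frameworks (e.g. low-rank or unbiased stochastic variants) try to reduce.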


Using local plasticity rules to train recurrent neural networks

no code implementations · 28 May 2019 Owen Marschall, Kyunghyun Cho, Cristina Savin

To learn useful dynamics on long time scales, neurons must use plasticity rules that account for long-term, circuit-wide effects of synaptic changes.

Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

no code implementations NeurIPS 2016 Travis Monk, Cristina Savin, Jörg Lücke

We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities.

Estimating Nonlinear Neural Response Functions using GP Priors and Kronecker Methods

no code implementations NeurIPS 2016 Cristina Savin, Gasper Tkacik

Jointly characterizing neural responses in terms of several external variables promises novel insights into circuit function, but remains computationally prohibitive in practice.


Spatio-temporal Representations of Uncertainty in Spiking Neural Networks

no code implementations NeurIPS 2014 Cristina Savin, Sophie Denève

It has been long argued that, because of inherent ambiguity and noise, the brain needs to represent uncertainty in the form of probability distributions.

Correlations strike back (again): the case of associative memory retrieval

no code implementations NeurIPS 2013 Cristina Savin, Peter Dayan, Mate Lengyel

It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population.


Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

no code implementations NeurIPS 2011 Cristina Savin, Peter Dayan, Máté Lengyel

Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items.
