no code implementations • 15 Oct 2022 • Anthony Zador, Sean Escola, Blake Richards, Bence Ölveczky, Yoshua Bengio, Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia Clopath, James DiCarlo, Surya Ganguli, Jeff Hawkins, Konrad Koerding, Alexei Koulakov, Yann LeCun, Timothy Lillicrap, Adam Marblestone, Bruno Olshausen, Alexandre Pouget, Cristina Savin, Terrence Sejnowski, Eero Simoncelli, Sara Solla, David Sussillo, Andreas S. Tolias, Doris Tsao
Neuroscience has long been an essential driver of progress in artificial intelligence (AI).
1 code implementation • 6 Sep 2022 • Edoardo Balzani, Jean Paul Noel, Pedro Herrero-Vidal, Dora E. Angelaki, Cristina Savin
Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas.
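As a toy illustration of what a "compact characterization" means here (not the paper's own method, which handles shared variability across areas more carefully), one can simulate a population whose activity is driven by a low-dimensional latent trajectory and recover that manifold with plain PCA; all names and parameters below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 neurons whose activity is driven by a 2-dimensional
# latent trajectory plus independent per-neuron noise.
T, n_neurons, n_latent = 500, 50, 2
latents = np.column_stack([np.sin(np.linspace(0, 8 * np.pi, T)),
                           np.cos(np.linspace(0, 6 * np.pi, T))])
loading = rng.normal(size=(n_latent, n_neurons))
activity = latents @ loading + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centered activity: the top components
# recover the low-dimensional manifold capturing shared co-variability.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = (S ** 2) / (S ** 2).sum()

# With only 2 true latents, two components dominate.
print(f"variance in top 2 PCs: {var_explained[:2].sum():.3f}")
```

With two true latents, the top two principal components absorb nearly all the variance, which is the sense in which the manifold is a compact summary of the population.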
1 code implementation • NeurIPS 2021 • Pedro Herrero-Vidal, Dmitry Rinberg, Cristina Savin
Identifying the common structure of neural dynamics across subjects is key for extracting unifying principles of brain computation and for many brain-machine interface applications.
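A minimal sketch of cross-subject alignment, assuming each subject's latent dynamics are an orthogonally transformed view of a shared trajectory (the paper itself uses a probabilistic alignment; this is just orthogonal Procrustes, and all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared 3-D latent trajectory, observed through two subject-specific
# orthogonal "views" (a toy stand-in for per-subject neural embeddings).
T, d = 200, 3
z = rng.normal(size=(T, d))

def random_orthogonal(d, rng):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

Ra, Rb = random_orthogonal(d, rng), random_orthogonal(d, rng)
subj_a, subj_b = z @ Ra, z @ Rb

# Orthogonal Procrustes: the rotation W minimizing ||subj_a @ W - subj_b||
# comes from the SVD of the cross-covariance.
U, _, Vt = np.linalg.svd(subj_a.T @ subj_b)
W = U @ Vt
err = np.linalg.norm(subj_a @ W - subj_b) / np.linalg.norm(subj_b)
print(f"relative alignment error: {err:.2e}")
```

In this noiseless toy case the recovered rotation aligns the two subjects' trajectories essentially exactly; real data adds noise and per-subject dimensionality mismatch, which is what motivates the probabilistic treatment.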
no code implementations • NeurIPS 2021 • Camille Rullán Buxó, Cristina Savin
Many features of human and animal behavior can be understood in the framework of Bayesian inference and optimal decision making, but the biological substrate of such processes is not fully understood.
1 code implementation • NeurIPS 2021 • Colin Bredenberg, Benjamin Lyo, Eero Simoncelli, Cristina Savin
Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience.
no code implementations • 12 May 2021 • Luke Y. Prince, Roy Henha Eyono, Ellen Boven, Arna Ghosh, Joe Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass, Blake A. Richards, Cristina Savin, Katharina Anna Wilmes
We briefly review common assumptions about biological learning in light of findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in recurrent neural networks.
1 code implementation • 15 Feb 2021 • Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho
Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning.
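One well-known way to adapt a hyperparameter online, rather than re-running an outer optimization, is hypergradient descent (Baydin et al., 2018): take gradient steps on the learning rate itself. This is not the method of the paper above, just a minimal sketch of dynamic hyperparameter adaptation on a toy quadratic; all names are ours:

```python
import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * ||w||^2
    return w

w = np.ones(10)
lr, meta_lr = 0.01, 0.001
g_prev = grad(w)
for _ in range(200):
    g = grad(w)
    lr += meta_lr * (g @ g_prev)   # hypergradient step on the learning rate
    w -= lr * g                    # ordinary gradient step with adapted lr
    g_prev = g

print(f"final loss: {0.5 * (w @ w):.3e}, adapted lr: {lr:.3f}")
```

The learning rate grows while successive gradients stay aligned and stops adapting as the loss flattens, so a poor initial choice self-corrects during training.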
1 code implementation • NeurIPS 2020 • Edoardo Balzani, Kaushik Lakshminarasimhan, Dora Angelaki, Cristina Savin
Recent technological advances in systems neuroscience have led to a shift away from using simple tasks, with low-dimensional, well-controlled stimuli, towards trying to understand neural activity during naturalistic behavior.
1 code implementation • NeurIPS 2020 • Colin Bredenberg, Eero Simoncelli, Cristina Savin
Neural populations encode the sensory world imperfectly: their capacity is limited by the number of neurons, availability of metabolic and other biophysical resources, and intrinsic noise.
no code implementations • NeurIPS 2019 • Caroline Haimerl, Cristina Savin, Eero Simoncelli
It has been observed that trial-to-trial neural activity is modulated by a shared, low-dimensional, stochastic signal that introduces task-irrelevant noise.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
To what extent can successful machine learning inform our understanding of biological learning?
no code implementations • 5 Jul 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
We present a framework for compactly summarizing many recent results in efficient and/or biologically plausible online training of recurrent neural networks (RNNs).
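Real-time recurrent learning (RTRL) is the canonical exact online algorithm that frameworks like this one cover: it carries the sensitivity of the hidden state to the weights forward in time instead of backpropagating through it. A didactic sketch for a tiny tanh RNN (ours, not the paper's code), with the accumulated online gradient checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)

n, T = 3, 20
W = 0.5 * rng.normal(size=(n, n))
xs = rng.normal(size=(T, n))
ys = rng.normal(size=(T, n))

def total_loss(W):
    h, L = np.zeros(n), 0.0
    for x, y in zip(xs, ys):
        h = np.tanh(W @ h + x)
        L += 0.5 * np.sum((h - y) ** 2)
    return L

# RTRL forward pass: carry P[i, j, k] = dh_i / dW_jk alongside h.
h = np.zeros(n)
P = np.zeros((n, n, n))
grad = np.zeros((n, n))
for x, y in zip(xs, ys):
    h_prev = h
    h = np.tanh(W @ h_prev + x)
    D = 1.0 - h ** 2                                  # tanh'(pre-activation)
    imm = np.einsum('ij,k->ijk', np.eye(n), h_prev)   # immediate term
    P = D[:, None, None] * (imm + np.einsum('il,ljk->ijk', W, P))
    grad += np.einsum('i,ijk->jk', h - y, P)          # online accumulation

# Finite-difference check of dL/dW.
eps, fd = 1e-6, np.zeros((n, n))
for j in range(n):
    for k in range(n):
        Wp, Wm = W.copy(), W.copy()
        Wp[j, k] += eps
        Wm[j, k] -= eps
        fd[j, k] = (total_loss(Wp) - total_loss(Wm)) / (2 * eps)

print("max |RTRL - finite diff|:", np.abs(grad - fd).max())
```

The sensitivity tensor `P` is what makes exact RTRL expensive (its size scales with the number of weights times the number of units), which is precisely why the approximations the framework organizes, such as UORO or KF-RTRL, exist.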
no code implementations • 28 May 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
To learn useful dynamics on long time scales, neurons must use plasticity rules that account for long-term, circuit-wide effects of synaptic changes.
no code implementations • NeurIPS 2016 • Travis Monk, Cristina Savin, Jörg Lücke
We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities.
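A plain-EM stand-in for this kind of model: a two-class Poisson mixture whose class-specific intensities are learned from spike-count-like data. The paper derives a neural circuit for a related generative model; the snippet below is just textbook EM, with all parameters and names ours:

```python
import numpy as np

rng = np.random.default_rng(3)

# Generate counts from a two-class Poisson mixture.
true_rates, true_pi = np.array([3.0, 12.0]), np.array([0.5, 0.5])
z = rng.choice(2, size=2000, p=true_pi)
x = rng.poisson(true_rates[z])

rates, pi = np.array([1.0, 20.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each class for each count
    # (the constant -log(x!) term cancels across classes).
    log_r = x[:, None] * np.log(rates) - rates + np.log(pi)
    log_r -= log_r.max(axis=1, keepdims=True)
    resp = np.exp(log_r)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate rates and mixing weights
    Nk = resp.sum(axis=0)
    rates = (resp * x[:, None]).sum(axis=0) / Nk
    pi = Nk / len(x)

print("learned rates:", np.round(np.sort(rates), 2))
```

The learned intensities land close to the generating values (3 and 12), illustrating how class identity and class-specific intensity statistics can be recovered jointly from unlabeled counts.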
no code implementations • NeurIPS 2016 • Cristina Savin, Gasper Tkacik
Jointly characterizing neural responses in terms of several external variables promises novel insights into circuit function, but remains computationally prohibitive in practice.
no code implementations • NeurIPS 2014 • Cristina Savin, Sophie Denève
It has been long argued that, because of inherent ambiguity and noise, the brain needs to represent uncertainty in the form of probability distributions.
no code implementations • NeurIPS 2013 • Cristina Savin, Peter Dayan, Mate Lengyel
It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population.
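A toy demonstration of why those dependencies matter (ours, not the paper's analysis): with correlated noise across two "neurons", a linear decoder built from the full covariance outperforms one that assumes independence.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two stimulus classes, identical correlated noise, mean difference
# along the first "neuron" only.
rho, n = 0.9, 20000
cov = np.array([[1.0, rho], [rho, 1.0]])
mu0, mu1 = np.zeros(2), np.array([1.0, 0.0])
r0 = rng.multivariate_normal(mu0, cov, size=n)
r1 = rng.multivariate_normal(mu1, cov, size=n)

def accuracy(w):
    thresh = w @ (mu0 + mu1) / 2.0
    return 0.5 * ((r0 @ w < thresh).mean() + (r1 @ w > thresh).mean())

w_full = np.linalg.solve(cov, mu1 - mu0)   # correlation-aware decoder
w_diag = (mu1 - mu0) / np.diag(cov)        # independence assumption
print(f"aware: {accuracy(w_full):.3f}, naive: {accuracy(w_diag):.3f}")
```

The correlation-aware decoder projects out the shared noise direction and gains a large accuracy margin; ignoring the off-diagonal structure discards exactly that information.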
no code implementations • NeurIPS 2011 • Cristina Savin, Peter Dayan, Máté Lengyel
Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items.
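A schematic of the palimpsest trade-off (the paper treats it in a principled Bayesian setting; this is only a Hopfield-style cartoon with parameters of our choosing): each new pattern is written on top of exponentially decaying older traces, so recent memories stay stable while the earliest is largely overwritten.

```python
import numpy as np

rng = np.random.default_rng(5)

N, n_patterns, decay = 200, 30, 0.9
patterns = rng.choice([-1, 1], size=(n_patterns, N))

# Hebbian storage with exponential forgetting of older traces.
W = np.zeros((N, N))
for xi in patterns:
    W = decay * W + np.outer(xi, xi) / N
np.fill_diagonal(W, 0)

def recall_overlap(xi):
    # one-step stability: overlap between the pattern and its update
    return (np.sign(W @ xi) == xi).mean() * 2 - 1

print(f"newest: {recall_overlap(patterns[-1]):.2f}, "
      f"oldest: {recall_overlap(patterns[0]):.2f}")
```

The newest pattern is recalled nearly perfectly while the first has been mostly erased by interference, which is the cost the abstract refers to.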