1 code implementation • 10 Apr 2024 • Marcel Nonnenmacher, Maneesh Sahani
The von Mises-Fisher distribution as an exponential family can be expressed in terms of either its natural or its mean parameters.
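As background (standard exponential-family identities, not details taken from the paper itself): the vMF density on the unit sphere is

p(x \mid \mu, \kappa) = C_d(\kappa)\, \exp(\kappa\, \mu^\top x), \qquad \|x\| = \|\mu\| = 1,\ \kappa \ge 0,

so the natural parameter is \eta = \kappa \mu \in \mathbb{R}^d, while the mean parameter is the expectation m = \mathbb{E}[x] = \bigl(I_{d/2}(\kappa)/I_{d/2-1}(\kappa)\bigr)\, \mu, a ratio of modified Bessel functions; converting between the two parameterisations requires inverting this Bessel ratio, which has no closed form.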
no code implementations • 23 Jun 2023 • William I. Walker, Arthur Gretton, Maneesh Sahani
We introduce a new approach to prediction in graphical models with latent-shift adaptation, i.e., where source and target environments differ in the distribution of an unobserved confounding latent variable.
no code implementations • NeurIPS 2023 • Changmin Yu, Neil Burgess, Maneesh Sahani, Samuel J. Gershman
Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.
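Concretely, the standard form of such an augmentation is r_t = r^{\text{ext}}_t + \beta\, r^{\text{int}}_t, where r^{\text{int}}_t is the self-generated bonus (e.g. a novelty or information-gain signal) and the weighting \beta is illustrative notation, not taken from the paper.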
no code implementations • 13 Nov 2022 • Ted Moskovitz, Kevin Miller, Maneesh Sahani, Matthew M. Botvinick
We apply a single model based on this observation to findings from research on executive control, reward-based learning, and judgment and decision making, showing that seemingly diverse dual-process phenomena can be understood as domain-specific consequences of a single underlying set of computational principles.
2 code implementations • 13 Sep 2022 • William I. Walker, Hugo Soulat, Changmin Yu, Maneesh Sahani
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for joint distributions over observed and latent variables.
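One common way to write the RPM joint (a sketch of the general construction; the notation here is mine): with observations split into J measured factors x = (x_1, \dots, x_J),

p(x, z) = p(z) \prod_{j=1}^{J} \frac{f_j(z \mid x_j)}{F_j(z)}\, p_{0,j}(x_j), \qquad F_j(z) = \int f_j(z \mid x_j)\, p_{0,j}(x_j)\, dx_j,

where the f_j are parametrised recognition factors and the p_{0,j} are non-parametric (e.g. empirical) marginals; the F_j normalisers make the joint integrate to one by construction, which is what "normalised semi-parametric" refers to.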
1 code implementation • 12 Sep 2022 • Changmin Yu, Hugo Soulat, Neil Burgess, Maneesh Sahani
A key goal of unsupervised learning is to go beyond density estimation and sample generation to reveal the structure inherent within observed data.
no code implementations • 17 Jul 2022 • Ted Moskovitz, Ta-Chu Kao, Maneesh Sahani, Matthew M. Botvinick
We propose a novel framework for multitask reinforcement learning based on the minimum description length (MDL) principle.
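For orientation, the generic two-part MDL objective (the general principle, not necessarily the paper's exact objective) is to minimise the total code length

L(H) + \sum_k L(D_k \mid H),

i.e. the cost of describing a shared hypothesis H plus the cost of describing each task's data D_k given H; in a multitask RL setting, H plays the role of structure shared across task policies.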
no code implementations • 3 Dec 2021 • Grace W. Lindsay, Josh Merel, Tom Mrsic-Flogel, Maneesh Sahani
Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high-dimensional input.
1 code implementation • NeurIPS 2021 • Hugo Soulat, Sepiedeh Keshavarzi, Troy Margrie, Maneesh Sahani
The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials; and so a full understanding of the computational significance of neural responses must be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach, common in many fields, to untangling the influence of multiple factors in data.
no code implementations • ICLR 2022 • Ted Moskovitz, Spencer R. Wilson, Maneesh Sahani
Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and enable them to traverse their environments efficiently to reach rewarding states.
no code implementations • NeurIPS 2020 • Lea Duncker, Laura Driscoll, Krishna V. Shenoy, Maneesh Sahani, David Sussillo
Here, we develop a novel learning rule designed to minimize interference between sequentially learned tasks in recurrent networks.
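A minimal numpy sketch of one generic interference-reduction idea, projecting new-task gradients away from the subspace used by earlier tasks (this illustrates the general projection approach; the paper's specific rule may differ):

```python
import numpy as np

def projection_matrix(A, eps=1e-3):
    """Orthogonal projector onto the span of the columns of A, regularised so
    that directions with little past-task activity remain trainable."""
    # A: (n, k) matrix whose columns are input/activity vectors from past tasks
    return A @ np.linalg.inv(A.T @ A + eps * np.eye(A.shape[1])) @ A.T

def project_out(grad, P):
    """Remove the gradient components that would disturb past-task responses."""
    return grad - P @ grad

# toy usage: weight gradient for a linear layer, past inputs X_old
rng = np.random.default_rng(0)
X_old = rng.standard_normal((10, 5))   # inputs seen on previous tasks
P = projection_matrix(X_old)           # projector onto past input subspace
G = rng.standard_normal((10, 3))       # new-task gradient w.r.t. weights
G_safe = project_out(G, P)             # update now (approximately) leaves old outputs unchanged
```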
no code implementations • NeurIPS 2020 • Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin
Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.
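A standard way to state the property (a general fact about stationary processes, not lifted from the paper): for a zero-mean stationary vector process with cross-covariance K(\tau) = \mathbb{E}[x(t+\tau)\, x(t)^\top], time reversal maps K(\tau) to K(\tau)^\top, so the process is reversible exactly when K(\tau) is symmetric at every lag; a non-reversible prior is therefore one whose cross-covariance satisfies K(\tau) \neq K(\tau)^\top.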
no code implementations • ICML 2020 • Li K. Wenliang, Theodore Moskovitz, Heishiro Kanagawa, Maneesh Sahani
Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable.
1 code implementation • NeurIPS 2019 • Li Kevin Wenliang, Maneesh Sahani
Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts, which are informed both by sensory evidence and by prior expectations about the structure of the environment.
1 code implementation • NeurIPS 2019 • Eszter Vertes, Maneesh Sahani
Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations.
1 code implementation • NeurIPS 2019 • Rahul Singh, Maneesh Sahani, Arthur Gretton
Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data.
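The standard IV setup, for orientation: one posits Y = f(X) + \varepsilon with a confounded regressor (\mathbb{E}[\varepsilon \mid X] \neq 0) but an instrument Z that influences X and satisfies \mathbb{E}[\varepsilon \mid Z] = 0; then \mathbb{E}[Y \mid Z] = \mathbb{E}[f(X) \mid Z], and two-stage procedures first model the dependence of X on Z and then solve this relation for f.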
no code implementations • 12 Feb 2019 • Lea Duncker, Gergo Bohner, Julien Boussard, Maneesh Sahani
We develop an approach to learn an interpretable semi-parametric model of a latent continuous-time stochastic dynamical system, assuming noisy high-dimensional outputs sampled at uneven times.
no code implementations • NeurIPS 2018 • Lea Duncker, Maneesh Sahani
We introduce a novel scalable approach to identifying common latent structure in neural population spike-trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation.
1 code implementation • 4 Jul 2018 • Gergo Bohner, Maneesh Sahani
In a common experimental setting, the behaviour of a noisy dynamical system is monitored in response to manipulations of one or more control parameters.
no code implementations • NeurIPS 2018 • Eszter Vertes, Maneesh Sahani
We introduce a new approach to learning in hierarchical latent-variable generative models called the "distributed distributional code Helmholtz machine", which emphasises flexibility and accuracy in the inferential process.
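For reference, a distributed distributional code represents a distribution q(z) implicitly through the expectations of a fixed set of encoding functions, r_k = \mathbb{E}_{q(z)}[\psi_k(z)] for k = 1, \dots, K (notation mine); inference then amounts to computing and updating these expectations rather than an explicit density.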
no code implementations • 2 Nov 2017 • Laura Douglas, Iliyan Zarov, Konstantinos Gourgoulias, Chris Lucas, Chris Hart, Adam Baker, Maneesh Sahani, Yura Perov, Saurabh Johri
We consider the problem of inference in a causal generative model where the set of available observations differs between data instances.
no code implementations • NeurIPS 2015 • Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani
We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.
no code implementations • NeurIPS 2013 • Marius Pachitariu, Adam M. Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, Maneesh Sahani
We perform extensive experiments on simulated images and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives.
no code implementations • NeurIPS 2013 • Marius Pachitariu, Biljana Petreska, Maneesh Sahani
We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations.
no code implementations • 23 Jan 2013 • Marius Pachitariu, Maneesh Sahani
We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units and show that the LCUs alone achieve state-of-the-art performance of 60.8% on the MRSC task.
no code implementations • NeurIPS 2012 • Lars Buesing, Jakob H. Macke, Maneesh Sahani
Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models.
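For the Gaussian baseline that such spectral methods build on, here is a minimal Ho-Kalman-style subspace-identification sketch in numpy (an illustrative textbook version, not the paper's extended estimator):

```python
import numpy as np

def subspace_id(Y, k, horizon=5):
    """Tiny subspace-identification sketch for a linear-Gaussian state-space model.
    Y: (T, p) observed time series; k: latent dimension.
    Returns estimates of C and A up to an unidentifiable similarity transform."""
    T, p = Y.shape
    Yc = Y - Y.mean(0)
    # empirical output autocovariances Lambda_i = E[y_{t+i} y_t^T]
    lam = [(Yc[i:].T @ Yc[:T - i]) / (T - i) for i in range(2 * horizon + 1)]
    # block Hankel matrix H[i, j] = Lambda_{i+j+1}
    H = np.block([[lam[i + j + 1] for j in range(horizon)] for i in range(horizon)])
    U, s, Vt = np.linalg.svd(H)
    obs = U[:, :k] * np.sqrt(s[:k])         # extended observability matrix
    C = obs[:p]                             # first block row gives C
    A = np.linalg.pinv(obs[:-p]) @ obs[p:]  # shift-invariance of observability gives A
    return C, A
```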
no code implementations • NeurIPS 2012 • Marius Pachitariu, Maneesh Sahani
We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables.
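"Binary-gated Gaussian" admits a compact reading (notation mine): each latent unit is s_i = b_i\, g_i with gate b_i \sim \text{Bernoulli}(\pi_i) and amplitude g_i \sim \mathcal{N}(0, \sigma_i^2), a spike-and-slab-style variable that is exactly zero when the gate is off and Gaussian when it is on.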
no code implementations • NeurIPS 2011 • Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani
Neurons in the neocortex code and compute as part of a locally interconnected population.
no code implementations • NeurIPS 2011 • Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani
Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function.
no code implementations • NeurIPS 2011 • Richard Turner, Maneesh Sahani
A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time.
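The classical non-probabilistic baseline for this envelope/carrier decomposition is the analytic-signal (Hilbert) envelope, whose known shortcomings motivate probabilistic treatments; a minimal scipy sketch:

```python
import numpy as np
from scipy.signal import hilbert

# toy amplitude-modulated signal: slow positive envelope times fast carrier
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # slow, strictly positive
carrier = np.sin(2 * np.pi * 50 * t)               # fast oscillation
x = envelope * carrier

analytic = hilbert(x)                              # analytic signal x + i*H[x]
est_envelope = np.abs(analytic)                    # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi) # instantaneous frequency (Hz)
```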
no code implementations • NeurIPS 2009 • Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges
We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.
no code implementations • NeurIPS 2008 • Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani
We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution.
no code implementations • NeurIPS 2007 • Pietro Berkes, Richard Turner, Maneesh Sahani
Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention.
no code implementations • NeurIPS 2007 • Richard Turner, Maneesh Sahani
Natural sounds are structured on many time-scales.