Search Results for author: Maneesh Sahani

Found 34 papers, 8 papers with code

A solution for the mean parametrization of the von Mises-Fisher distribution

1 code implementation 10 Apr 2024 Marcel Nonnenmacher, Maneesh Sahani

The von Mises-Fisher distribution as an exponential family can be expressed in terms of either its natural or its mean parameters.
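
The relationship at issue, stated with standard exponential-family facts (not the paper's new result): on the unit sphere in R^d, the vMF density and its two parametrizations are

$$ p(x \mid \mu, \kappa) \propto \exp(\kappa\,\mu^\top x), \qquad \eta = \kappa\mu \ \text{(natural)}, \qquad m = \mathbb{E}[x] = A_d(\kappa)\,\mu \ \text{(mean)}, \quad A_d(\kappa) = \frac{I_{d/2}(\kappa)}{I_{d/2-1}(\kappa)}, $$

where I_ν is the modified Bessel function of the first kind. Recovering (μ, κ) from m requires inverting κ ↦ A_d(κ), which has no closed form; that inversion is the problem the paper addresses.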

Prediction under Latent Subgroup Shifts with High-Dimensional Observations

no code implementations 23 Jun 2023 William I. Walker, Arthur Gretton, Maneesh Sahani

We introduce a new approach to prediction in graphical models with latent-shift adaptation, i.e., where source and target environments differ in the distribution of an unobserved confounding latent variable.
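
As an illustrative special case (our schematic, not necessarily the paper's exact graph): if a discrete latent subgroup U generates both X and Y, and only the distribution of U shifts from source to target, then the target predictive distribution is

$$ p_t(y \mid x) = \frac{\sum_u q_t(u)\, p(x \mid u)\, p(y \mid u)}{\sum_u q_t(u)\, p(x \mid u)}, $$

so adaptation amounts to re-estimating the subgroup weights q_t(u) while reusing the mechanisms p(x | u) and p(y | u) learned at the source.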

Successor-Predecessor Intrinsic Exploration

no code implementations NeurIPS 2023 Changmin Yu, Neil Burgess, Maneesh Sahani, Samuel J. Gershman

Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards.

Atari Games Efficient Exploration +1
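
A generic form of such transient augmentation (a sketch; the particular bonus and schedule are the paper's contribution and are not reproduced here):

$$ r_t = r_t^{\text{ext}} + \beta_t\, r_t^{\text{int}}, \qquad \beta_t \to 0 \ \text{as experience accumulates}, $$

so the intrinsic bonus shapes early exploration while leaving the asymptotic objective, the external return, unchanged.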

A Unified Theory of Dual-Process Control

no code implementations 13 Nov 2022 Ted Moskovitz, Kevin Miller, Maneesh Sahani, Matthew M. Botvinick

We apply a single model based on this observation to findings from research on executive control, reward-based learning, and judgment and decision making, showing that seemingly diverse dual-process phenomena can be understood as domain-specific consequences of a single underlying set of computational principles.

Decision Making

Unsupervised representation learning with recognition-parametrised probabilistic models

2 code implementations 13 Sep 2022 William I. Walker, Hugo Soulat, Changmin Yu, Maneesh Sahani

We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for joint distributions over observed and latent variables.

Image Classification Representation Learning +1
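
Schematically (our paraphrase of the construction), with J observed measurements x_j and N training points, the RPM joint is

$$ p(z, x_{1:J}) = p_\theta(z) \prod_{j=1}^{J} \frac{f_j(z \mid x_j)}{F_j(z)}\, p_0(x_j), \qquad F_j(z) = \frac{1}{N} \sum_{n=1}^{N} f_j\big(z \mid x_j^{(n)}\big), $$

where p_0 is the empirical marginal of each measurement: the model is normalised by construction, and no explicit likelihood over the possibly high-dimensional x_j is ever needed.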

Structured Recognition for Generative Models with Explaining Away

1 code implementation 12 Sep 2022 Changmin Yu, Hugo Soulat, Neil Burgess, Maneesh Sahani

A key goal of unsupervised learning is to go beyond density estimation and sample generation to reveal the structure inherent within observed data.

Density Estimation Hippocampus +2

Minimum Description Length Control

no code implementations 17 Jul 2022 Ted Moskovitz, Ta-Chu Kao, Maneesh Sahani, Matthew M. Botvinick

We propose a novel framework for multitask reinforcement learning based on the minimum description length (MDL) principle.

Bayesian Inference Continuous Control +2
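
One common variational form of such a description-length trade-off (a sketch in our own notation, not necessarily the paper's exact objective):

$$ \max_{\pi_{1:K},\, \pi_0}\ \sum_{k=1}^{K} \Big( \mathbb{E}_{\pi_k}[R_k] \;-\; \lambda\, \mathrm{KL}\big(\pi_k \,\|\, \pi_0\big) \Big), $$

where π_0 is a learned default policy shared across the K tasks and the KL term prices the extra bits needed to encode each task-specific policy given the default.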

Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning

no code implementations 3 Dec 2021 Grace W. Lindsay, Josh Merel, Tom Mrsic-Flogel, Maneesh Sahani

Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input.

reinforcement-learning Reinforcement Learning (RL) +1

Probabilistic Tensor Decomposition of Neural Population Spiking Activity

1 code implementation NeurIPS 2021 Hugo Soulat, Sepiedeh Keshavarzi, Troy Margrie, Maneesh Sahani

The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials; and so a full understanding of the computational significance of neural responses must be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach to untangling the influence of multiple factors in data that is common in many fields.

Anatomy Tensor Decomposition +1
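
The classical canonical polyadic (CP) decomposition that such methods build on writes a neurons × time × trials array as a sum of rank-1 components:

$$ x_{ntk} \approx \sum_{r=1}^{R} a_{nr}\, b_{tr}\, c_{kr}, $$

with a_r, b_r, c_r carrying neuron, temporal, and trial/condition loadings respectively; the paper embeds this low-rank structure in a probabilistic observation model appropriate for spiking data.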

A First-Occupancy Representation for Reinforcement Learning

no code implementations ICLR 2022 Ted Moskovitz, Spencer R. Wilson, Maneesh Sahani

Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states.

reinforcement-learning Reinforcement Learning (RL)
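
One way to write the first-occupancy representation (FR) in a discrete MDP, contrasting it with the successor representation's expected discounted visit counts:

$$ F^{\pi}(s, s') = \mathbb{E}_{\pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{1}\big(s_t = s',\ s' \notin \{s_0, \dots, s_{t-1}\}\big) \,\Big|\, s_0 = s \Big], $$

so only the first arrival at s' is counted, making F^π sensitive to how quickly a state can be reached rather than how often it is revisited.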

Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

no code implementations NeurIPS 2020 Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin

Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.

Gaussian Processes Model Selection +1
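
In our notation: for a stationary vector-valued GP with cross-covariance K(τ) = Cov(x(t+τ), x(t)), stationarity already forces K(−τ) = K(τ)ᵀ, and the process is time-reversible exactly when K(τ) is symmetric at every lag. Non-reversible priors therefore require

$$ K(\tau) \neq K(\tau)^{\top} \quad \text{for some } \tau, $$

which is only possible for multivariate latents: every one-dimensional stationary GP is reversible, which is why standard GP priors cannot express this dynamical signature.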

Amortised Learning by Wake-Sleep

no code implementations ICML 2020 Li K. Wenliang, Theodore Moskovitz, Heishiro Kanagawa, Maneesh Sahani

Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable.
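
The identity that makes an amortised approach possible is Fisher's identity, which turns the intractable likelihood gradient into a posterior expectation:

$$ \nabla_{\theta} \log p_{\theta}(x) = \mathbb{E}_{p_{\theta}(z \mid x)}\big[ \nabla_{\theta} \log p_{\theta}(x, z) \big], $$

and, roughly, the method fits a function of x to this expectation by regression on (z, x) pairs sampled from the model during a sleep phase, avoiding an explicit posterior approximation.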

A neurally plausible model for online recognition and postdiction in a dynamical environment

1 code implementation NeurIPS 2019 Li Kevin Wenliang, Maneesh Sahani

Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts---which are informed both by sensory evidence and by prior expectations about the structure of the environment.

A neurally plausible model learns successor representations in partially observable environments

1 code implementation NeurIPS 2019 Eszter Vertes, Maneesh Sahani

Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations.

reinforcement-learning Reinforcement Learning (RL)
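
For reference, the successor representation (SR) that the paper generalises is defined in a fully observed MDP by

$$ M^{\pi}(s, s') = \mathbb{E}_{\pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{1}(s_t = s') \,\Big|\, s_0 = s \Big], \qquad V^{\pi}(s) = \sum_{s'} M^{\pi}(s, s')\, r(s'), $$

and the challenge addressed here is learning an analogue of M^π when the state s is hidden and must be inferred from noisy observations.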

Kernel Instrumental Variable Regression

1 code implementation NeurIPS 2019 Rahul Singh, Maneesh Sahani, Arthur Gretton

Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data.

regression
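
The two-stage structure underlying IV methods, which the paper carries out with kernel ridge regression (our condensed transcription):

$$ Y = f(X) + \varepsilon, \qquad \mathbb{E}[\varepsilon \mid Z] = 0; $$
$$ \text{Stage 1:}\quad \hat{\mu}(z) \approx \mathbb{E}[\phi(X) \mid Z = z], \qquad \text{Stage 2:}\quad \hat{f} = \arg\min_{f} \sum_{i} \big( y_i - \langle f, \hat{\mu}(z_i) \rangle \big)^2 + \lambda \|f\|^2. $$

Conditioning on the instrument Z averages out the confounded noise ε, so stage 2 recovers the structural function f rather than the conditional mean E[Y | X].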

Learning interpretable continuous-time models of latent stochastic dynamical systems

no code implementations 12 Feb 2019 Lea Duncker, Gergo Bohner, Julien Boussard, Maneesh Sahani

We develop an approach to learn an interpretable semi-parametric model of a latent continuous-time stochastic dynamical system, assuming noisy high-dimensional outputs sampled at uneven times.
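
Schematically (our notation), the model class is a latent stochastic differential equation observed at irregular times:

$$ dx(t) = f\big(x(t)\big)\, dt + \Sigma^{1/2}\, dW(t), \qquad y_i \sim p\big(y \mid C\, x(t_i)\big), \quad i = 1, \dots, N, $$

with the drift f learned semi-parametrically, so the fitted dynamics can be interpreted directly, e.g. through the fixed points of f, rather than through a black-box transition network.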

Temporal alignment and latent Gaussian process factor inference in population spike trains

no code implementations NeurIPS 2018 Lea Duncker, Maneesh Sahani

We introduce a novel scalable approach to identifying common latent structure in neural population spike-trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation.

Gaussian Processes
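
Schematically (our notation), each trial's latent trajectory is a time-warped draw from a shared Gaussian process:

$$ x_n(t) = \tilde{x}\big(\tau_n(t)\big), \qquad \tilde{x} \sim \mathcal{GP}, \quad \tau_n \ \text{monotone}, $$

so trial-to-trial variability is factored into the shape of the underlying trajectory and the rate at which each trial traverses it.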

Empirical fixed point bifurcation analysis

1 code implementation 4 Jul 2018 Gergo Bohner, Maneesh Sahani

In a common experimental setting, the behaviour of a noisy dynamical system is monitored in response to manipulations of one or more control parameters.

Flexible and accurate inference and learning for deep generative models

no code implementations NeurIPS 2018 Eszter Vertes, Maneesh Sahani

We introduce a new approach to learning in hierarchical latent-variable generative models called the "distributed distributional code Helmholtz machine", which emphasises flexibility and accuracy in the inferential process.

A Universal Marginalizer for Amortized Inference in Generative Models

no code implementations 2 Nov 2017 Laura Douglas, Iliyan Zarov, Konstantinos Gourgoulias, Chris Lucas, Chris Hart, Adam Baker, Maneesh Sahani, Yura Perov, Saurabh Johri

We consider the problem of inference in a causal generative model where the set of available observations differs between data instances.
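
As we read it, the approach trains a single network to answer every such query at once: for an arbitrary observed set O,

$$ \mathrm{UM}_{\phi}(x_O) \approx \big( p(x_i \mid x_O) \big)_{i \notin O}, $$

trained on samples from the generative model with observations masked at random, so one amortised model serves any pattern of available evidence.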

Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)

no code implementations NeurIPS 2015 Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani

We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.

Extracting regions of interest from biological images with convolutional sparse block coding

no code implementations NeurIPS 2013 Marius Pachitariu, Adam M. Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, Maneesh Sahani

We perform extensive experiments on simulated images, and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives.

Recurrent linear models of simultaneously-recorded neural populations

no code implementations NeurIPS 2013 Marius Pachitariu, Biljana Petreska, Maneesh Sahani

We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations.

Regularization and nonlinearities for neural language models: when are they needed?

no code implementations 23 Jan 2013 Marius Pachitariu, Maneesh Sahani

We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units, and show that the LCUs alone achieve state-of-the-art performance of 60.8% on the MRSC task.

Sentence Sentence Completion

Spectral learning of linear dynamics from generalised-linear observations with application to neural population data

no code implementations NeurIPS 2012 Lars Buesing, Jakob H. Macke, Maneesh Sahani

Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models.

Learning visual motion in recurrent neural networks

no code implementations NeurIPS 2012 Marius Pachitariu, Maneesh Sahani

We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables.
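
A binary-gated Gaussian latent pairs an on/off gate with a real-valued amplitude (a sketch in our notation):

$$ h_i = s_i\, z_i, \qquad s_i \in \{0, 1\}, \quad z_i \sim \mathcal{N}(0, \sigma^2), $$

and the model's recurrent dynamics over such latents generate the image sequence, so motion is carried by how gates and amplitudes evolve in time.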

Probabilistic amplitude and frequency demodulation

no code implementations NeurIPS 2011 Richard Turner, Maneesh Sahani

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time.
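
The decomposition in question is

$$ y(t) = a(t)\, c(t), \qquad a(t) > 0 \ \text{slowly varying}, \quad c(t) \ \text{a carrier whose instantaneous frequency varies slowly}, $$

which is ill-posed as stated, since many (a, c) pairs reproduce the same y; the probabilistic treatment resolves the ambiguity by placing priors on envelope and carrier and inferring them jointly.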

Dynamical segmentation of single trials from population neural data

no code implementations NeurIPS 2011 Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function.

Occlusive Components Analysis

no code implementations NeurIPS 2009 Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.

Object

On Sparsity and Overcompleteness in Image Models

no code implementations NeurIPS 2007 Pietro Berkes, Richard Turner, Maneesh Sahani

Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention.

Model Selection
