Search Results for author: Maneesh Sahani

Found 27 papers, 5 papers with code

Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning

no code implementations • 3 Dec 2021 Grace W. Lindsay, Josh Merel, Tom Mrsic-Flogel, Maneesh Sahani

Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input.

Reinforcement Learning • Transfer Learning

Probabilistic Tensor Decomposition of Neural Population Spiking Activity

1 code implementation NeurIPS 2021 Hugo Soulat, Sepiedeh Keshavarzi, Troy Margrie, Maneesh Sahani

The firing of neural populations is coordinated across cells, in time, and across experimental conditions or repeated experimental trials; a full understanding of the computational significance of neural responses must therefore be based on a separation of these different contributions to structured activity. Tensor decomposition is an approach to untangling the influence of multiple factors in data that is common in many fields.

Tensor Decomposition • Variational Inference
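The core idea behind the tensor decomposition of trial-structured neural data can be illustrated with a minimal numpy sketch (not the paper's probabilistic model; all dimensions here are hypothetical): a rank-R (CP) tensor over trials × neurons × time is exactly a sum of R outer products of per-mode factors, and those factors are what a decomposition recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x neurons x time bins, tensor rank R
K, N, T, R = 5, 8, 20, 2

# Ground-truth factors for each mode
trial_f = rng.standard_normal((K, R))
neuron_f = rng.standard_normal((N, R))
time_f = rng.standard_normal((T, R))

# A rank-R (CP) tensor is a sum of R outer products of the mode factors.
tensor = np.einsum('kr,nr,tr->knt', trial_f, neuron_f, time_f)

# The same tensor written as an explicit sum of rank-1 components,
# one per factor triple -- the structure a CP decomposition recovers.
explicit = sum(
    np.einsum('k,n,t->knt', trial_f[:, r], neuron_f[:, r], time_f[:, r])
    for r in range(R)
)
```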

A First-Occupancy Representation for Reinforcement Learning

no code implementations ICLR 2022 Ted Moskovitz, Spencer R. Wilson, Maneesh Sahani

Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states.


Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

no code implementations NeurIPS 2020 Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin

Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.

Gaussian Processes • Model Selection
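The non-reversibility the paper targets has a simple signature that can be checked directly (an illustrative linear-dynamics example, not the paper's GP construction): for a stationary process, time reversibility requires the lag-1 cross-covariance to be symmetric, and rotational dynamics break that symmetry.

```python
import numpy as np

# Stationary linear dynamics x_{t+1} = A x_t + w_t,  w_t ~ N(0, Q),
# with A a damped rotation (an illustrative non-reversible system).
theta, rho = 0.3, 0.95
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
Q = 0.1 * np.eye(2)

# Stationary covariance Pi solves the discrete Lyapunov equation
# Pi = A Pi A^T + Q; fixed-point iteration converges since rho < 1.
Pi = Q.copy()
for _ in range(500):
    Pi = A @ Pi @ A.T + Q

# Lag-1 cross-covariance Cov(x_{t+1}, x_t) = A Pi.  A time-reversible
# stationary process would make this matrix symmetric; the rotation
# does not, which is the non-reversibility these priors express.
lam1 = A @ Pi
asymmetry = np.linalg.norm(lam1 - lam1.T)
```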

Amortised Learning by Wake-Sleep

no code implementations ICML 2020 Li K. Wenliang, Theodore Moskovitz, Heishiro Kanagawa, Maneesh Sahani

Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable.

A neurally plausible model for online recognition and postdiction in a dynamical environment

1 code implementation NeurIPS 2019 Li Kevin Wenliang, Maneesh Sahani

Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts, which are informed both by sensory evidence and by prior expectations about the structure of the environment.
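The near-optimal integration referred to here follows, in the simplest Gaussian case, a textbook precision-weighting rule (a minimal sketch for context, not the paper's neural model): each source of evidence is weighted by its reliability, and the combined estimate is sharper than any single source.

```python
import numpy as np

def combine_gaussians(means, variances):
    """Bayes-optimal fusion of independent Gaussian sources:
    the posterior mean is a precision-weighted average of the means,
    and the posterior precision is the sum of the precisions."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# A broad prior (mean 0, variance 4) combined with a sharper sensory
# cue (mean 2, variance 1): the posterior sits nearer the reliable cue.
mean, var = combine_gaussians([0.0, 2.0], [4.0, 1.0])
```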

A neurally plausible model learns successor representations in partially observable environments

1 code implementation NeurIPS 2019 Eszter Vertes, Maneesh Sahani

Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations.
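The successor representation at the centre of this work has a closed form when the policy's transition matrix is known (a minimal sketch on a hypothetical 4-state ring, not the paper's partially observable setting): the SR is the discounted resolvent of the transition matrix.

```python
import numpy as np

# Transition matrix of a 4-state ring under a fixed policy (hypothetical).
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0]])
gamma = 0.9

# Successor representation: M[s, s'] = expected discounted future
# occupancy of s' starting from s, i.e. M = (I - gamma * P)^(-1).
M = np.linalg.inv(np.eye(4) - gamma * P)

# Each row sums to 1 / (1 - gamma): the total discounted occupancy.
```

Given M, the value of any reward vector r under this policy is just M @ r, which is what makes the representation useful for rapid transfer across tasks.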


Kernel Instrumental Variable Regression

1 code implementation NeurIPS 2019 Rahul Singh, Maneesh Sahani, Arthur Gretton

Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data.
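The classical linear special case that KIV generalises with kernels is two-stage least squares, sketched below on synthetic confounded data (the data-generating process is hypothetical): ordinary regression is biased by the hidden confounder, while projecting the treatment onto the instrument recovers the causal slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Confounder u affects both treatment x and outcome y, so OLS of y on x
# is biased; instrument z affects y only through x.
u = rng.standard_normal(n)
z = rng.standard_normal(n)
x = z + u + 0.5 * rng.standard_normal(n)
y = 2.0 * x + 3.0 * u + 0.5 * rng.standard_normal(n)

def slope(a, b):
    """Least-squares slope of b on a (no intercept; all variables
    are zero-mean by construction)."""
    return (a @ b) / (a @ a)

beta_ols = slope(x, y)       # biased upward by the confounder u

# Two-stage least squares: project x onto the instrument, then regress
# y on the projection, isolating the variation in x driven by z alone.
x_hat = slope(z, x) * z
beta_iv = slope(x_hat, y)    # consistent for the causal effect (2.0)
```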

Learning interpretable continuous-time models of latent stochastic dynamical systems

no code implementations • 12 Feb 2019 Lea Duncker, Gergo Bohner, Julien Boussard, Maneesh Sahani

We develop an approach to learn an interpretable semi-parametric model of a latent continuous-time stochastic dynamical system, assuming noisy high-dimensional outputs sampled at uneven times.

Temporal alignment and latent Gaussian process factor inference in population spike trains

no code implementations NeurIPS 2018 Lea Duncker, Maneesh Sahani

We introduce a novel scalable approach to identifying common latent structure in neural population spike-trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation.

Gaussian Processes

Empirical fixed point bifurcation analysis

1 code implementation • 4 Jul 2018 Gergo Bohner, Maneesh Sahani

In a common experimental setting, the behaviour of a noisy dynamical system is monitored in response to manipulations of one or more control parameters.

Flexible and accurate inference and learning for deep generative models

no code implementations NeurIPS 2018 Eszter Vertes, Maneesh Sahani

We introduce a new approach to learning in hierarchical latent-variable generative models called the "distributed distributional code Helmholtz machine", which emphasises flexibility and accuracy in the inferential process.

A Universal Marginalizer for Amortized Inference in Generative Models

no code implementations • 2 Nov 2017 Laura Douglas, Iliyan Zarov, Konstantinos Gourgoulias, Chris Lucas, Chris Hart, Adam Baker, Maneesh Sahani, Yura Perov, Saurabh Johri

We consider the problem of inference in a causal generative model where the set of available observations differs between data instances.

Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)

no code implementations NeurIPS 2015 Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani

We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.

Extracting regions of interest from biological images with convolutional sparse block coding

no code implementations NeurIPS 2013 Marius Pachitariu, Adam M. Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, Maneesh Sahani

We perform extensive experiments on simulated images and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives.

Recurrent linear models of simultaneously-recorded neural populations

no code implementations NeurIPS 2013 Marius Pachitariu, Biljana Petreska, Maneesh Sahani

We show that RLMs describe motor-cortical population data better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations.

Regularization and nonlinearities for neural language models: when are they needed?

no code implementations • 23 Jan 2013 Marius Pachitariu, Maneesh Sahani

We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units, and show that the LCUs alone achieve state-of-the-art performance of 60.8% on the MRSC task.

Sentence Completion

Spectral learning of linear dynamics from generalised-linear observations with application to neural population data

no code implementations NeurIPS 2012 Lars Buesing, Jakob H. Macke, Maneesh Sahani

Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models.
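The subspace-identification idea behind this line of work can be sketched analytically (a toy Gaussian-observation case, not the paper's non-Gaussian extension): autocovariances of the outputs of a latent linear dynamical system satisfy Λ_k = C A^k Π C^T, and stacking them into a block-Hankel matrix reveals the latent dimensionality as its rank.

```python
import numpy as np

# Latent LDS: x_{t+1} = A x_t + w_t, observed as y_t = C x_t (+ noise).
rho, theta = 0.9, 0.5
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])   # 2-d latent state
rng = np.random.default_rng(2)
C = rng.standard_normal((5, 2))                          # 5-d observations
Q = np.eye(2)

# Stationary latent covariance: Pi = A Pi A^T + Q (fixed-point iteration).
Pi = Q.copy()
for _ in range(500):
    Pi = A @ Pi @ A.T + Q

# Output autocovariances Lambda_k = C A^k Pi C^T, stacked in a
# block-Hankel matrix H[i, j] = Lambda_{i+j+1}.  H factors through the
# latent state, so its rank equals the latent dimension (here 2).
lams = [C @ np.linalg.matrix_power(A, k) @ Pi @ C.T for k in range(1, 7)]
H = np.block([[lams[i + j] for j in range(3)] for i in range(3)])

svals = np.linalg.svd(H, compute_uv=False)
latent_dim = int((svals > 1e-8 * svals[0]).sum())
```

In practice the Λ_k are estimated from data rather than computed exactly, and the SVD of H also yields the system parameters up to a similarity transform.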

Learning visual motion in recurrent neural networks

no code implementations NeurIPS 2012 Marius Pachitariu, Maneesh Sahani

We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables.

Dynamical segmentation of single trials from population neural data

no code implementations NeurIPS 2011 Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function.

Probabilistic amplitude and frequency demodulation

no code implementations NeurIPS 2011 Richard Turner, Maneesh Sahani

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time.
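The classical, non-probabilistic baseline for this decomposition is envelope extraction via the analytic signal, sketched here with a numpy FFT (the Hilbert construction, offered as context rather than the paper's probabilistic method; the test signal is hypothetical).

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero the negative frequencies and
    double the positive ones (the classical Hilbert construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

# Slowly varying positive envelope modulating a fast carrier.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
carrier = np.cos(2 * np.pi * 200 * t)
signal = envelope * carrier

# The magnitude of the analytic signal recovers the envelope, because
# the envelope's bandwidth is far below the carrier frequency.
recovered = np.abs(analytic_signal(signal))
max_err = np.max(np.abs(recovered - envelope))
```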

Occlusive Components Analysis

no code implementations NeurIPS 2009 Jörg Lücke, Richard Turner, Maneesh Sahani, Marc Henniges

We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another.

On Sparsity and Overcompleteness in Image Models

no code implementations NeurIPS 2007 Pietro Berkes, Richard Turner, Maneesh Sahani

Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention.

Model Selection
