Search Results for author: Jonathan W. Pillow

Found 41 papers, 9 papers with code

Characterizing neural dependencies with copula models

no code implementations NeurIPS 2008 Pietro Berkes, Frank Wood, Jonathan W. Pillow

The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses.

Time-rescaling methods for the estimation and assessment of non-Poisson neural encoding models

no code implementations NeurIPS 2009 Jonathan W. Pillow

Recent work on the statistical modeling of neural responses has focused on modulated renewal processes in which the spike rate is a function of the stimulus and recent spiking history.
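
For background on the family of methods this paper extends (a minimal sketch, not the paper's non-Poisson machinery): the time-rescaling theorem says that if a model's conditional intensity lambda(t) is correct, then inter-spike intervals rescaled by the integrated intensity are i.i.d. Exponential(1), which is what makes rescaling useful for goodness-of-fit assessment. The spike-time and rate inputs below are hypothetical.

```python
import numpy as np

def rescaled_intervals(spike_times, rate_fn, dt=1e-3):
    """Rescale inter-spike intervals by the integrated intensity Lambda(t).

    Under a correctly specified model the rescaled intervals are
    i.i.d. Exponential(1) (the time-rescaling theorem).
    """
    t_grid = np.arange(0.0, spike_times[-1] + dt, dt)
    cum_rate = np.cumsum(rate_fn(t_grid)) * dt         # Lambda(t) = integral of lambda
    lam_at_spikes = np.interp(spike_times, t_grid, cum_rate)
    return np.diff(lam_at_spikes)                      # u_k = Lambda(t_k) - Lambda(t_{k-1})

# Sanity check: a 20 Hz homogeneous Poisson train rescales to unit-mean intervals.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(1.0 / 20.0, size=500))
u = rescaled_intervals(spikes, lambda t: np.full_like(t, 20.0))
print(u.mean())  # approximately 1.0
```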

Bayesian Spike-Triggered Covariance Analysis

no code implementations NeurIPS 2011 Il Memming Park, Jonathan W. Pillow

We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference.
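
As background, classical spike-triggered covariance looks for stimulus directions along which the spike-conditional variance differs from the prior variance. A minimal sketch follows; note that the paper's empirical-Bayes method replaces this pipeline's hand-picked feature count and hard eigenvalue cutoff with model-based inference.

```python
import numpy as np

def stc_features(stimuli, spike_counts, n_features=2):
    """Classical STC: top eigenvectors of the spike-triggered covariance excess.

    stimuli: (T, d) array of stimulus vectors; spike_counts: (T,) spike counts.
    """
    w = spike_counts / spike_counts.sum()
    sta = w @ stimuli                                  # spike-triggered average
    centered = stimuli - sta
    stc = (centered * w[:, None]).T @ centered         # spike-weighted covariance
    excess = stc - np.cov(stimuli, rowvar=False)       # variance change vs. the prior
    evals, evecs = np.linalg.eigh(excess)
    order = np.argsort(np.abs(evals))[::-1]            # largest |variance change| first
    return evecs[:, order[:n_features]], evals[order[:n_features]]
```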

Active learning of neural response functions with Gaussian processes

no code implementations NeurIPS 2011 Mijung Park, Greg Horwitz, Jonathan W. Pillow

With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate this nonlinear combination rule.

Active Learning · Experimental Design +1

Bayesian active learning with localized priors for fast receptive field characterization

no code implementations NeurIPS 2012 Mijung Park, Jonathan W. Pillow

Active learning can substantially improve the yield of neurophysiology experiments by adaptively selecting stimuli to probe a neuron's receptive field (RF) in real time.

Active Learning

Fully Bayesian inference for neural models with negative-binomial spiking

no code implementations NeurIPS 2012 James Scott, Jonathan W. Pillow

Characterizing the information carried by neural populations in the brain requires accurate statistical models of neural spike responses.

Bayesian Inference · Data Augmentation +1

Bayesian estimation of discrete entropy with mixtures of stick-breaking priors

no code implementations NeurIPS 2012 Evan Archer, Il Memming Park, Jonathan W. Pillow

We consider the problem of estimating Shannon's entropy H in the under-sampled regime, where the number of possible symbols may be unknown or countably infinite.
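
To see why the under-sampled regime is hard, compare the naive plug-in estimator with a simple first-order (Miller-Madow) bias correction; both systematically underestimate entropy when most symbols are unseen, which is the gap the paper's Bayesian stick-breaking priors target. This toy comparison is illustrative only.

```python
import numpy as np

def plugin_entropy(counts):
    """Naive plug-in estimate of Shannon entropy in bits."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def miller_madow(counts):
    """Plug-in estimate plus a first-order bias correction."""
    k_observed = np.count_nonzero(counts)
    n = counts.sum()
    return plugin_entropy(counts) + (k_observed - 1) / (2 * n * np.log(2))

rng = np.random.default_rng(1)
true_p = rng.dirichlet(np.ones(1000))       # 1000 possible symbols
sample = rng.multinomial(100, true_p)       # only 100 samples: under-sampled
print(plugin_entropy(sample), miller_madow(sample))  # both biased low
```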

Spectral methods for neural characterization using generalized quadratic models

no code implementations NeurIPS 2013 Il Memming Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

The quadratic form characterizes the neuron's stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range.
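
In symbols, the model described here is rate = f(x'Cx + b'x + a). A minimal runnable sketch, with illustrative dimensions and an assumed exponential choice for the invertible nonlinearity f:

```python
import numpy as np

def gqm_rate(x, C, b, a, f=np.exp):
    """Generalized quadratic model: invertible nonlinearity f applied to a
    quadratic form in the stimulus x (C symmetric, b linear term, a offset)."""
    return f(x @ C @ x + b @ x + a)

rng = np.random.default_rng(2)
d = 10
x = rng.standard_normal(d)          # stimulus vector
C = rng.standard_normal((d, d))
C = (C + C.T) / 2                   # symmetrize the quadratic kernel
b = rng.standard_normal(d)
print(gqm_rate(x, C, b, a=-1.0))    # predicted spike rate
```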

Spike train entropy-rate estimation using hierarchical Dirichlet process priors

no code implementations NeurIPS 2013 Karin C. Knudson, Jonathan W. Pillow

We present both a fully Bayesian and empirical Bayes entropy rate estimator based on this model, and demonstrate their performance on simulated and real neural spike train data.

Universal models for binary spike patterns using centered Dirichlet processes

no code implementations NeurIPS 2013 Il Memming Park, Evan W. Archer, Kenneth Latimer, Jonathan W. Pillow

We also establish a condition for equivalence between the cascade-logistic and the 2nd-order maxent or "Ising" model, making cascade-logistic a reasonable choice for base measure in a universal model.

Bayesian inference for low rank spatiotemporal neural receptive fields

no code implementations NeurIPS 2013 Mijung Park, Jonathan W. Pillow

In typical experiments with naturalistic or flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large number of coefficients needed to specify an integration profile across time and space.

Bayesian Inference · Computational Efficiency

Bayesian entropy estimation for binary spike train data using parametric prior knowledge

1 code implementation NeurIPS 2013 Evan W. Archer, Il Memming Park, Jonathan W. Pillow

Shannon's entropy is a basic quantity in information theory, and a fundamental building block for the analysis of neural codes.

Inferring synaptic conductances from spike trains with a biophysically inspired point process model

no code implementations NeurIPS 2014 Kenneth W. Latimer, E.J. Chichilnisky, Fred Rieke, Jonathan W. Pillow

We show that the model fit to extracellular spike trains can predict excitatory and inhibitory conductances elicited by novel stimuli with nearly the same accuracy as a model trained directly with intracellular conductances.

Optimal prior-dependent neural population codes under shared input noise

no code implementations NeurIPS 2014 Agnieszka Grabska-Barwinska, Jonathan W. Pillow

The brain uses population codes to form distributed, noise-tolerant representations of sensory and motor variables.

Inferring sparse representations of continuous signals with continuous orthogonal matching pursuit

no code implementations NeurIPS 2014 Karin C. Knudson, Jacob Yates, Alexander Huk, Jonathan W. Pillow

Many signals, such as spike trains recorded in multi-channel electrophysiological recordings, may be represented as the sparse sum of translated and scaled copies of waveforms whose timing and amplitudes are of interest.
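
The signal model here is x(t) ≈ sum_i a_i w(t - tau_i). Below is a minimal sketch of plain matching pursuit over a dictionary of discretely shifted waveforms; the paper's continuous OMP additionally interpolates timings and amplitudes off the discrete grid, which this sketch does not attempt.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=5):
    """Greedy sparse decomposition of `signal`.

    dictionary: (n_atoms, T) array whose rows are unit-norm shifted waveforms.
    Returns [(atom_index, amplitude), ...] and the final residual.
    """
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        corr = dictionary @ residual                 # projection onto each atom
        k = int(np.argmax(np.abs(corr)))             # best-matching atom
        picks.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[k]
    return picks, residual
```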

Low-dimensional models of neural population activity in sensory cortical circuits

no code implementations NeurIPS 2014 Evan W. Archer, Urs Koster, Jonathan W. Pillow, Jakob H. Macke

Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for a relatively large number of idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional latent dynamical model.

Sparse Bayesian structure learning with “dependent relevance determination” priors

no code implementations NeurIPS 2014 Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, Jonathan W. Pillow

Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies.

regression

Convolutional spike-triggered covariance analysis for neural subunit models

no code implementations NeurIPS 2015 Anqi Wu, Il Memming Park, Jonathan W. Pillow

Subunit models provide a powerful yet parsimonious description of neural spike responses to complex stimuli.

Bayesian latent structure discovery from multi-neuron recordings

2 code implementations NeurIPS 2016 Scott W. Linderman, Ryan P. Adams, Jonathan W. Pillow

Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties.

Bayesian Inference · Clustering +1

Adaptive optimal training of animal behavior

no code implementations NeurIPS 2016 Ji Hyun Bak, Jung Yoon Choi, Athena Akrami, Ilana Witten, Jonathan W. Pillow

We show that we can accurately infer the parameters of a policy-gradient-based learning algorithm that describes how the animal's internal model of the task evolves over the course of training.

Experimental Design · reinforcement-learning +1

A Bayesian method for reducing bias in neural representational similarity analysis

1 code implementation NeurIPS 2016 Ming Bo Cai, Nicolas W. Schuck, Jonathan W. Pillow, Yael Niv

We show that this approach translates structured noise from estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task.

Exploiting gradients and Hessians in Bayesian optimization and Bayesian quadrature

no code implementations 31 Mar 2017 Anqi Wu, Mikio C. Aoi, Jonathan W. Pillow

An exciting branch of machine learning research focuses on methods for learning, optimizing, and integrating unknown functions that are difficult or costly to evaluate.

Bayesian Optimization · Gaussian Processes

Dependent relevance determination for smooth and structured sparse regression

1 code implementation 28 Nov 2017 Anqi Wu, Oluwasanmi Koyejo, Jonathan W. Pillow

Our approach represents a hierarchical extension of the relevance determination framework, where we add a transformed Gaussian process to model the dependencies between the prior variances of regression weights.

regression
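
A toy sketch of the hierarchical construction described above, under assumed choices (an RBF kernel for the latent Gaussian process and exp as the transformation): the prior variances of neighboring weights vary smoothly, so relevant coefficients cluster into contiguous regions rather than appearing independently.

```python
import numpy as np

def drd_prior_sample(d=100, lengthscale=10.0, seed=0):
    """Sample regression weights from a dependent-relevance-style prior."""
    rng = np.random.default_rng(seed)
    idx = np.arange(d)
    K = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / lengthscale ** 2)
    u = rng.multivariate_normal(np.zeros(d), K + 1e-8 * np.eye(d))  # latent GP
    variances = np.exp(u - 3.0)                 # transformed GP -> smooth prior variances
    return rng.normal(0.0, np.sqrt(variances))  # weights: relevant in smooth regions
```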

Gaussian process based nonlinear latent structure discovery in multivariate spike train data

no code implementations NeurIPS 2017 Anqi Wu, Nicholas G. Roy, Stephen Keeley, Jonathan W. Pillow

We apply the model to spike trains recorded from hippocampal place cells and show that it compares favorably to a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.

Gaussian Processes

Shared Representational Geometry Across Neural Networks

2 code implementations 28 Nov 2018 Qihong Lu, Po-Hsuan Chen, Jonathan W. Pillow, Peter J. Ramadge, Kenneth A. Norman, Uri Hasson

Different neural networks trained on the same dataset often learn similar input-output mappings with very different weights.

Power-law efficient neural codes provide general link between perceptual bias and discriminability

no code implementations NeurIPS 2018 Michael Morais, Jonathan W. Pillow

Specifically, we show that the same lawful relationship between bias and discriminability arises whenever Fisher information is allocated proportional to any power of the prior distribution.
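
Written out, the allocation rule the abstract refers to is (a sketch in assumed notation: theta is the encoded variable, p its prior, J the Fisher information of the code, alpha >= 0 the power):

```latex
J(\theta) \propto p(\theta)^{\alpha}
```

The familiar infomax-efficient code, \sqrt{J(\theta)} \propto p(\theta), is the special case \alpha = 2; the abstract's claim is that the same bias-discriminability law holds for any \alpha.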

Scaling the Poisson GLM to massive neural datasets through polynomial approximations

no code implementations NeurIPS 2018 David Zoltowski, Jonathan W. Pillow

We use the quadratic estimator to fit a fully-coupled Poisson GLM to spike train data recorded from 831 neurons across five regions of the mouse brain for a duration of 41 minutes, binned at 1 ms resolution.
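
For intuition about the polynomial-approximation idea (a sketch; the paper's exact construction may differ): replacing the nonlinear term of the Poisson log-likelihood with a quadratic a0 + a1*z + a2*z^2 in z = x'w makes the maximizer available in closed form from sufficient statistics (X'X and X'y) that can be accumulated in a single pass over a massive recording.

```python
import numpy as np

def quadratic_glm_fit(X, y, a1, a2, ridge=1e-4):
    """Closed-form weights for a quadratically approximated Poisson GLM.

    X: (T, d) design matrix; y: (T,) spike counts. a1, a2 are assumed given,
    e.g. from a polynomial fit of exp(z) over the expected range of z (a2 > 0).
    Approximate log-likelihood: sum_t [ y_t*(x_t'w) - a2*(x_t'w)**2 - a1*(x_t'w) ].
    """
    A = 2.0 * a2 * (X.T @ X) + ridge * np.eye(X.shape[1])  # curvature term
    b = X.T @ y - a1 * X.sum(axis=0)                       # linear term
    return np.linalg.solve(A, b)                           # maximizer of the quadratic
```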

Learning a latent manifold of odor representations from neural responses in piriform cortex

no code implementations NeurIPS 2018 Anqi Wu, Stan Pashkovski, Sandeep R. Datta, Jonathan W. Pillow

Our approach is based on the Gaussian process latent variable model, and seeks to map odorants to points in a low-dimensional embedding space, where distances between points in the embedding space relate to the similarity of population responses they elicit.

Model-based targeted dimensionality reduction for neuronal population data

no code implementations NeurIPS 2018 Mikio Aoi, Jonathan W. Pillow

Here we propose a new model-based method for targeted dimensionality reduction based on a probabilistic generative model of the population response data.

Dimensionality Reduction

Efficient inference for time-varying behavior during learning

no code implementations NeurIPS 2018 Nicholas G. Roy, Ji Hyun Bak, Athena Akrami, Carlos Brody, Jonathan W. Pillow

To overcome these limitations, we propose a dynamic psychophysical model that efficiently tracks trial-to-trial changes in behavior over the course of training.

Unifying and generalizing models of neural dynamics during decision-making

1 code implementation 13 Jan 2020 David M. Zoltowski, Jonathan W. Pillow, Scott W. Linderman

An open question in systems and computational neuroscience is how neural circuits accumulate evidence towards a decision.

Decision Making · Open-Ended Question Answering

High-contrast "gaudy" images improve the training of deep neural network models of visual cortex

no code implementations 13 Jun 2020 Benjamin R. Cowley, Jonathan W. Pillow

A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model of responses from visual cortical neurons.

Active Learning

Identifying signal and noise structure in neural population activity with Gaussian process factor models

no code implementations NeurIPS 2020 Stephen Keeley, Mikio Aoi, Yiyi Yu, Spencer Smith, Jonathan W. Pillow

Here we address this shortcoming by proposing "signal-noise" Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity.

Variational Inference

High-contrast “gaudy” images improve the training of deep neural network models of visual cortex

1 code implementation NeurIPS 2020 Benjamin Cowley, Jonathan W. Pillow

We propose high-contrast, binarized versions of natural images, termed gaudy images, to efficiently train DNNs to predict higher-order visual cortical responses.

Active Learning
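
A plausible construction of a gaudy image from the description above (the paper's exact thresholding rule may differ): binarize each color channel to its extremes around the channel mean, maximizing contrast while preserving coarse structure.

```python
import numpy as np

def gaudy(image):
    """Binarize a uint8 (H, W, 3) image to 0/255 around each channel's mean."""
    thresh = image.mean(axis=(0, 1), keepdims=True)   # per-channel threshold
    return np.where(image >= thresh, 255, 0).astype(np.uint8)
```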

Inferring learning rules from animal decision-making

1 code implementation NeurIPS 2020 Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow

Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal’s policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules.

Decision Making

Loss-calibrated expectation propagation for approximate Bayesian decision-making

no code implementations 10 Jan 2022 Michael J. Morais, Jonathan W. Pillow

Approximate Bayesian inference methods provide a powerful suite of tools for finding approximations to intractable posterior distributions.

Bayesian Inference · Decision Making

Bayesian Active Learning for Discrete Latent Variable Models

no code implementations 27 Feb 2022 Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow

We show that our method substantially reduces the amount of data needed to fit GLM-HMM, and outperforms a variety of approximate methods based on variational and amortized inference.

Active Learning · Decision Making +1

Correcting motion induced fluorescence artifacts in two-channel neural imaging

no code implementations 26 Apr 2022 Matthew S. Creamer, Kevin S. Chen, Andrew M. Leifer, Jonathan W. Pillow

Existing approaches for this correction, such as taking the ratio of the two channels, do not account for channel independent noise in the measured fluorescence.

Time Series Analysis · Vocal Bursts Valence Prediction
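
For reference, the ratiometric correction the abstract criticizes amounts to the one-liner below: gain shared across channels (motion) cancels in the ratio, but noise that is independent in the reference channel propagates directly into the corrected trace.

```python
import numpy as np

def ratiometric_correction(activity, reference, eps=1e-9):
    """Naive two-channel correction: divide the activity channel (e.g., GCaMP)
    by the motion reference channel (e.g., RFP)."""
    return np.asarray(activity, float) / (np.asarray(reference, float) + eps)
```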

Spectral learning of Bernoulli linear dynamical systems models

1 code implementation 3 Mar 2023 Iris R. Stone, Yotam Sagiv, Il Memming Park, Jonathan W. Pillow

Latent linear dynamical systems with Bernoulli observations provide a powerful modeling framework for identifying the temporal dynamics underlying binary time series data, which arise in a variety of contexts such as binary decision-making and discrete stochastic processes (e.g., binned neural spike trains).

Decision Making · Time Series +1
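
A generative sketch of the model class named here (dimensions, noise scale, and parameters are arbitrary illustrations, not the paper's spectral estimator): latent linear-Gaussian dynamics drive Bernoulli observations through a logistic link.

```python
import numpy as np

def sample_bernoulli_lds(A, C, d, T, noise=0.1, seed=0):
    """Simulate a Bernoulli LDS: x_{t+1} = A x_t + w_t, y_t ~ Bern(sigmoid(C x_t + d))."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[0])
    ys = []
    for _ in range(T):
        x = A @ x + noise * rng.standard_normal(A.shape[0])  # latent linear dynamics
        p = 1.0 / (1.0 + np.exp(-(C @ x + d)))               # logistic observation model
        ys.append(rng.binomial(1, p))                        # binary observations
    return np.array(ys)
```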
