
no code implementations • 26 Apr 2022 • Matthew S. Creamer, Kevin S. Chen, Andrew M. Leifer, Jonathan W. Pillow

Existing approaches for this correction, such as taking the ratio of the two channels, do not account for channel independent noise in the measured fluorescence.
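The issue with channel-independent noise can be seen in a toy simulation (the constant true signal and noise levels below are illustrative assumptions, not the paper's setup): dividing the two channels cancels the shared motion artifact, but noise that is independent in each channel passes straight into the ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
motion = 1.0 + 0.1 * rng.normal(size=T)   # artifact shared by both channels
noise_g = 0.05 * rng.normal(size=T)       # channel-independent noise
noise_r = 0.05 * rng.normal(size=T)

green = 1.0 * motion + noise_g            # activity channel (constant true signal)
red = motion + noise_r                    # activity-independent reference channel

ratio = green / red
# The shared artifact is cancelled, but the independent noise of BOTH
# channels remains in the ratio-corrected trace.
print(green.std(), ratio.std())
```

The ratio trace is less variable than the raw activity channel (the artifact is gone), yet its residual fluctuation exceeds either channel's own noise floor, which is the shortcoming the paper's method targets.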

no code implementations • 27 Feb 2022 • Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow

We then consider a powerful class of temporally structured latent variable models known as Input-Output Hidden Markov Models (IO-HMMs), which have recently gained prominence in neuroscience.

no code implementations • 10 Jan 2022 • Michael J. Morais, Jonathan W. Pillow

Approximate Bayesian inference methods provide a powerful suite of tools for finding approximations to intractable posterior distributions.

no code implementations • NeurIPS 2020 • Stephen Keeley, Mikio Aoi, Yiyi Yu, Spencer Smith, Jonathan W. Pillow

Here we address this shortcoming by proposing "signal-noise" Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity.

1 code implementation • NeurIPS 2020 • Benjamin Cowley, Jonathan W. Pillow

We propose high-contrast, binarized versions of natural images, termed gaudy images, to efficiently train DNNs to predict higher-order visual cortical responses.
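A minimal sketch of the binarization idea, assuming a simple threshold at the image mean (the paper defines the exact transformation):

```python
import numpy as np

def gaudy(image):
    """Binarize an image to high contrast: each pixel goes to 0 or 255
    depending on whether it falls below or above the image mean.
    (Thresholding at the mean is an assumption; see the paper for the
    exact rule used to construct gaudy images.)"""
    image = np.asarray(image, dtype=float)
    return np.where(image >= image.mean(), 255, 0).astype(np.uint8)

# usage: a toy 2x2 grayscale image
img = np.array([[10, 200], [30, 250]])
print(gaudy(img))   # dark pixels -> 0, bright pixels -> 255
```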

1 code implementation • NeurIPS 2020 • Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow

Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal’s policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules.

no code implementations • 13 Jun 2020 • Benjamin R. Cowley, Jonathan W. Pillow

A key challenge in understanding the sensory transformations of the visual system is to obtain a highly predictive model of responses from visual cortical neurons.

1 code implementation • 13 Jan 2020 • David M. Zoltowski, Jonathan W. Pillow, Scott W. Linderman

An open question in systems and computational neuroscience is how neural circuits accumulate evidence towards a decision.

no code implementations • 7 Jun 2019 • Stephen L. Keeley, David M. Zoltowski, Yiyi Yu, Jacob L. Yates, Spencer L. Smith, Jonathan W. Pillow

We demonstrate that PAL estimators achieve fast and accurate extraction of latent structure from multi-neuron spike train data.

no code implementations • NeurIPS 2018 • Anqi Wu, Stan Pashkovski, Sandeep R. Datta, Jonathan W. Pillow

Our approach is based on the Gaussian process latent variable model, and seeks to map odorants to points in a low-dimensional embedding space, where distances between points in the embedding space relate to the similarity of population responses they elicit.

no code implementations • NeurIPS 2018 • Nicholas G. Roy, Ji Hyun Bak, Athena Akrami, Carlos Brody, Jonathan W. Pillow

To overcome these limitations, we propose a dynamic psychophysical model that efficiently tracks trial-to-trial changes in behavior over the course of training.

no code implementations • NeurIPS 2018 • Michael Morais, Jonathan W. Pillow

Specifically, we show that the same lawful relationship between bias and discriminability arises whenever Fisher information is allocated proportional to any power of the prior distribution.

no code implementations • NeurIPS 2018 • Mikio Aoi, Jonathan W. Pillow

Here we propose a new model-based method for targeted dimensionality reduction based on a probabilistic generative model of the population response data.

no code implementations • NeurIPS 2018 • David Zoltowski, Jonathan W. Pillow

We use the quadratic estimator to fit a fully-coupled Poisson GLM to spike train data recorded from 831 neurons across five regions of the mouse brain for a duration of 41 minutes, binned at 1 ms resolution.
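The coupled Poisson GLM setup can be illustrated on simulated data (the two-neuron circuit, coupling weights, and use of scikit-learn's `PoissonRegressor` are stand-ins here, not the paper's quadratic estimator): each neuron's per-bin spike count is Poisson with a log-rate that depends on its own and other neurons' spike history.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Simulate one neuron whose 1-ms spike counts depend on its own spike
# history and one coupled neuron's history (weights are made up).
T = 5000
other = rng.poisson(0.1, size=T)          # coupled neuron's spike train
counts = np.zeros(T, dtype=int)
w_self, w_other, b = -1.0, 1.5, -3.0
for t in range(1, T):
    rate = np.exp(b + w_self * counts[t - 1] + w_other * other[t - 1])
    counts[t] = rng.poisson(rate)

# Design matrix: one-bin spike history of self and of the other neuron.
X = np.column_stack([counts[:-1], other[:-1]])
y = counts[1:]
glm = PoissonRegressor(alpha=1e-6).fit(X, y)
print(glm.coef_)   # should roughly recover (w_self, w_other)
```

A fully-coupled fit at the paper's scale (831 neurons, 41 minutes, 1 ms bins) is what motivates their fast quadratic estimator; the direct maximum-likelihood route sketched here scales poorly.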

2 code implementations • 28 Nov 2018 • Qihong Lu, Po-Hsuan Chen, Jonathan W. Pillow, Peter J. Ramadge, Kenneth A. Norman, Uri Hasson

Different neural networks trained on the same dataset often learn similar input-output mappings with very different weights.

no code implementations • NeurIPS 2017 • Anqi Wu, Nicholas G. Roy, Stephen Keeley, Jonathan W. Pillow

We apply the model to spike trains recorded from hippocampal place cells and show that it compares favorably to a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.

1 code implementation • 28 Nov 2017 • Anqi Wu, Oluwasanmi Koyejo, Jonathan W. Pillow

Our approach represents a hierarchical extension of the relevance determination framework, where we add a transformed Gaussian process to model the dependencies between the prior variances of regression weights.

no code implementations • 31 Mar 2017 • Anqi Wu, Mikio C. Aoi, Jonathan W. Pillow

An exciting branch of machine learning research focuses on methods for learning, optimizing, and integrating unknown functions that are difficult or costly to evaluate.

no code implementations • NeurIPS 2016 • Ji Hyun Bak, Jung Yoon Choi, Athena Akrami, Ilana Witten, Jonathan W. Pillow

We show that we can accurately infer the parameters of a policy-gradient-based learning algorithm that describes how the animal's internal model of the task evolves over the course of training.

1 code implementation • NeurIPS 2016 • Ming Bo Cai, Nicolas W. Schuck, Jonathan W. Pillow, Yael Niv

We show that this approach translates structured noise from estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task.

2 code implementations • NeurIPS 2016 • Scott W. Linderman, Ryan P. Adams, Jonathan W. Pillow

Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties.

no code implementations • NeurIPS 2015 • Anqi Wu, Il Memming Park, Jonathan W. Pillow

Subunit models provide a powerful yet parsimonious description of neural spike responses to complex stimuli.

no code implementations • NeurIPS 2014 • Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, Jonathan W. Pillow

Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies.
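Both baselines mentioned above are available in scikit-learn; this toy fit (the data-generating weights are made up) shows the independent-prior behavior the paper contrasts itself with, where irrelevant weights are shrunk toward zero one at a time with no shared structure:

```python
import numpy as np
from sklearn.linear_model import Lasso, ARDRegression

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]             # only 3 relevant features
y = X @ w_true + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)        # L1 penalty: independent per weight
ard = ARDRegression().fit(X, y)           # one independent prior variance per weight
print(np.round(lasso.coef_, 2))           # irrelevant weights shrink to ~0
print(np.round(ard.coef_, 2))
```

Because each weight gets its own penalty or prior variance, neither model can exploit the case where neighboring weights (e.g. adjacent pixels of a receptive field) are likely to be relevant together — the dependency structure the paper's method adds.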

no code implementations • NeurIPS 2014 • Agnieszka Grabska-Barwinska, Jonathan W. Pillow

The brain uses population codes to form distributed, noise-tolerant representations of sensory and motor variables.

no code implementations • NeurIPS 2014 • Karin C. Knudson, Jacob Yates, Alexander Huk, Jonathan W. Pillow

Many signals, such as spike trains recorded in multi-channel electrophysiological recordings, may be represented as the sparse sum of translated and scaled copies of waveforms whose timing and amplitudes are of interest.
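A toy version of that signal model, with a greedy matching-pursuit recovery (the template, event times, and greedy scheme are illustrative assumptions, not the paper's inference algorithm):

```python
import numpy as np

waveform = np.array([0.5, 1.0, 0.5])          # known spike-waveform template
events = {10: 2.0, 40: -1.0}                  # true times -> amplitudes
signal = np.zeros(64)
for t, a in events.items():
    signal[t:t + 3] += a * waveform           # sparse sum of shifted, scaled copies

# Greedily peel off events: find the best-matching shift, record its
# amplitude, subtract it out, repeat.
found = {}
residual = signal.copy()
norm2 = waveform @ waveform
for _ in range(2):
    corr = np.correlate(residual, waveform, mode="valid") / norm2
    t = int(np.argmax(np.abs(corr)))
    found[t] = corr[t]
    residual[t:t + 3] -= corr[t] * waveform
print(found)
```

In this noiseless toy case the two event times and amplitudes are recovered exactly; the hard part the paper addresses is doing this reliably with noise, overlapping waveforms, and unknown amplitudes.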

no code implementations • NeurIPS 2014 • Kenneth W. Latimer, E.J. Chichilnisky, Fred Rieke, Jonathan W. Pillow

We show that the model fit to extracellular spike trains can predict excitatory and inhibitory conductances elicited by novel stimuli with nearly the same accuracy as a model trained directly with intracellular conductances.

no code implementations • NeurIPS 2014 • Evan W. Archer, Urs Koster, Jonathan W. Pillow, Jakob H. Macke

Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for a relatively large number of idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional latent dynamical model.

no code implementations • NeurIPS 2013 • Il Memming Park, Evan W. Archer, Kenneth Latimer, Jonathan W. Pillow

We also establish a condition for equivalence between the cascade-logistic and the 2nd-order maxent or "Ising" model, making cascade-logistic a reasonable choice for base measure in a universal model.

no code implementations • NeurIPS 2013 • Karin C. Knudson, Jonathan W. Pillow

We present both a fully Bayesian and empirical Bayes entropy rate estimator based on this model, and demonstrate their performance on simulated and real neural spike train data.

1 code implementation • NeurIPS 2013 • Evan W. Archer, Il Memming Park, Jonathan W. Pillow

Shannon's entropy is a basic quantity in information theory, and a fundamental building block for the analysis of neural codes.

no code implementations • NeurIPS 2013 • Il Memming Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

The quadratic form characterizes the neuron's stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range.

no code implementations • NeurIPS 2013 • Mijung Park, Jonathan W. Pillow

In typical experiments with naturalistic or flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large number of coefficients needed to specify an integration profile across time and space.

no code implementations • NeurIPS 2012 • James Scott, Jonathan W. Pillow

Characterizing the information carried by neural populations in the brain requires accurate statistical models of neural spike responses.

no code implementations • NeurIPS 2012 • Evan Archer, Il Memming Park, Jonathan W. Pillow

We consider the problem of estimating Shannon's entropy H in the under-sampled regime, where the number of possible symbols may be unknown or countably infinite.
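The under-sampling problem is easy to reproduce: the naive plug-in estimator below badly underestimates entropy when the number of samples is far smaller than the number of symbols. The Miller-Madow correction is shown only for contrast; it is a classical frequentist fix, not the Bayesian estimator the paper proposes.

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Naive plug-in estimate of Shannon entropy H (in bits)."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def miller_madow(samples):
    """Plug-in estimate plus the classic Miller-Madow bias correction
    (K - 1) / (2 N ln 2), where K is the number of observed symbols
    and N the number of samples."""
    K = len(set(samples))
    N = len(samples)
    return plugin_entropy(samples) + (K - 1) / (2 * N * np.log(2))

# Under-sampled draw: 200 samples from a uniform over 1000 symbols.
rng = np.random.default_rng(0)
samples = rng.integers(0, 1000, size=200)
print(plugin_entropy(samples), miller_madow(samples), "true:", np.log2(1000))
```

With only 200 samples the plug-in estimate cannot exceed log2(200) ≈ 7.6 bits, far below the true ~10 bits, which is the bias that motivates estimators built for the under-sampled regime.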

no code implementations • NeurIPS 2012 • Mijung Park, Jonathan W. Pillow

Active learning can substantially improve the yield of neurophysiology experiments by adaptively selecting stimuli to probe a neuron's receptive field (RF) in real time.

no code implementations • NeurIPS 2011 • Mijung Park, Greg Horwitz, Jonathan W. Pillow

With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate this nonlinear combination rule.

no code implementations • NeurIPS 2011 • Il Memming Park, Jonathan W. Pillow

We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference.

no code implementations • NeurIPS 2009 • Jonathan W. Pillow

Recent work on the statistical modeling of neural responses has focused on modulated renewal processes in which the spike rate is a function of the stimulus and recent spiking history.

no code implementations • NeurIPS 2008 • Pietro Berkes, Frank Wood, Jonathan W. Pillow

The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses.
