Search Results for author: Stephen Keeley

Found 5 papers, 0 papers with code

Efficient non-conjugate Gaussian process factor models for spike count data using polynomial approximations

no code implementations ICML 2020 Stephen Keeley, David Zoltowski, Jonathan Pillow, Spencer Smith, Yiyi Yu

Gaussian Process Factor Analysis (GPFA) has been broadly applied to the problem of identifying smooth, low-dimensional temporal structure underlying large-scale neural recordings. However, spike trains are non-Gaussian, which motivates combining GPFA with discrete observation models for binned spike count data. The drawback to this approach is that GPFA priors are not conjugate to count model likelihoods, which makes inference challenging. Here we address this obstacle by introducing a fast, approximate inference method for non-conjugate GPFA models.

Variational Inference
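
To make the non-conjugacy issue concrete, here is a minimal generative sketch of count-data GPFA in Python/NumPy. It is an illustration under assumed settings (latent dimensionality, RBF kernel, exponential nonlinearity, variable names), not the paper's implementation or its polynomial-approximation inference method.

import numpy as np

rng = np.random.default_rng(0)
T, N, D = 100, 20, 3              # time bins, neurons, latent dimensions (assumed)

# Smooth GP prior over time for each latent, via an RBF kernel.
t = np.arange(T)[:, None]
lengthscale = 10.0
K = np.exp(-0.5 * (t - t.T) ** 2 / lengthscale ** 2) + 1e-6 * np.eye(T)

# Draw D independent GP latents; X has shape (T, D).
X = rng.multivariate_normal(np.zeros(T), K, size=D).T

# Loadings and baselines map latents to per-neuron firing rates.
C = 0.5 * rng.standard_normal((D, N))
d = -2.0 * np.ones(N)             # baseline log-rate (assumed)
rates = np.exp(X @ C + d)         # (T, N), strictly positive

# Poisson observations on binned counts: this likelihood is not conjugate to
# the Gaussian process prior over X, which is what makes exact posterior
# inference intractable and motivates fast approximate inference.
Y = rng.poisson(rates)
print(Y.shape)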

Response Time Improves Choice Prediction and Function Estimation for Gaussian Process Models of Perception and Preferences

no code implementations 9 Jun 2023 Michael Shvartsman, Benjamin Letham, Stephen Keeley

Models for human choice prediction in preference learning and psychophysics often consider only binary response data, requiring many samples to accurately learn preferences or perceptual detection thresholds.

Identifying signal and noise structure in neural population activity with Gaussian process factor models

no code implementations NeurIPS 2020 Stephen Keeley, Mikio Aoi, Yiyi Yu, Spencer Smith, Jonathan W. Pillow

Here we address this shortcoming by proposing "signal-noise" Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity.

Variational Inference
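
The signal/noise split can be illustrated with a toy generative sketch: "signal" GP latents are reused across repeated trials of the same stimulus, while "noise" GP latents are redrawn on every trial, and both drive Poisson spiking through separate loading matrices. All dimensions, kernels, and names below are assumptions for illustration, not the authors' code.

import numpy as np

rng = np.random.default_rng(1)
T, N, R = 80, 15, 5               # time bins, neurons, repeated trials (assumed)
D_sig, D_noise = 2, 2

t = np.arange(T)[:, None]
def rbf(ls):
    return np.exp(-0.5 * (t - t.T) ** 2 / ls ** 2) + 1e-6 * np.eye(T)

K_sig, K_noise = rbf(12.0), rbf(4.0)

# Signal latents: one draw, shared by every repeat of the stimulus.
X_sig = rng.multivariate_normal(np.zeros(T), K_sig, size=D_sig).T

C_sig = 0.4 * rng.standard_normal((D_sig, N))
C_noise = 0.4 * rng.standard_normal((D_noise, N))
d = -3.0 * np.ones(N)             # baseline log-rate (assumed)

trials = []
for r in range(R):
    # Noise latents: an independent draw on each trial.
    X_noise = rng.multivariate_normal(np.zeros(T), K_noise, size=D_noise).T
    rates = np.exp(X_sig @ C_sig + X_noise @ C_noise + d)
    trials.append(rng.poisson(rates))

Y = np.stack(trials)              # (R, T, N): shared signal + trial-varying noise
print(Y.shape)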

Gaussian process based nonlinear latent structure discovery in multivariate spike train data

no code implementations NeurIPS 2017 Anqi Wu, Nicholas G. Roy, Stephen Keeley, Jonathan W. Pillow

We apply the model to spike trains recorded from hippocampal place cells and show that it compares favorably to a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.

Gaussian Processes
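
As a rough illustration of the kind of nonlinear latent-variable model discussed above, the sketch below draws a smooth one-dimensional latent trajectory from a GP and passes it through bump-shaped, place-field-like tuning functions to produce Poisson spike counts. The fixed Gaussian bumps stand in for the GP-distributed tuning functions in the paper, and every setting here is an assumption.

import numpy as np

rng = np.random.default_rng(2)
T, N = 200, 10                    # time bins, neurons (assumed)

# One-dimensional latent trajectory with a smooth GP (RBF) prior,
# e.g. position along a track.
t = np.arange(T)[:, None]
K = np.exp(-0.5 * (t - t.T) ** 2 / 20.0 ** 2) + 1e-6 * np.eye(T)
x = rng.multivariate_normal(np.zeros(T), K)

# Nonlinear tuning: Gaussian bumps of the latent, loosely mimicking
# hippocampal place fields (centers, widths, and peak rates are assumptions).
centers = np.linspace(x.min(), x.max(), N)
width = 0.5 * (x.max() - x.min()) / N
rates = 0.5 * np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2 / width ** 2)

# Poisson spike counts per bin; recovering x and the tuning curves from Y is
# the latent structure discovery problem addressed above.
Y = rng.poisson(rates)
print(Y.shape, Y.mean())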
