no code implementations • ICML 2020 • Stephen Keeley, David Zoltowski, Jonathan Pillow, Spencer Smith, Yiyi Yu
Gaussian Process Factor Analysis (GPFA) has been broadly applied to the problem of identifying smooth, low-dimensional temporal structure underlying large-scale neural recordings. However, spike trains are non-Gaussian, which motivates combining GPFA with discrete observation models for binned spike count data. The drawback to this approach is that GPFA priors are not conjugate to count model likelihoods, which makes inference challenging. Here we address this obstacle by introducing a fast, approximate inference method for non-conjugate GPFA models.
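The non-conjugacy the abstract describes can be seen directly in the generative model: smooth latents drawn from a GP prior drive Poisson spike counts through a nonlinear link. Below is a minimal simulation sketch of such a count-observation GPFA model; the RBF kernel, exponential link, dimensions, and baseline are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, N = 100, 3, 20          # time bins, latent dimensions, neurons
t = np.arange(T, dtype=float)

# Squared-exponential GP kernel over time (length scale is an assumption)
ell = 10.0
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell**2)
K += 1e-6 * np.eye(T)         # jitter for numerical stability

# Draw D smooth latent trajectories from the GP prior
latents = np.linalg.cholesky(K) @ rng.standard_normal((T, D))

# Loading matrix maps latents to per-neuron log firing rates
C = 0.5 * rng.standard_normal((D, N))
log_rates = latents @ C - 1.0  # constant baseline (assumption)

# Poisson observation model: binned spike counts are non-Gaussian,
# so the GP prior is not conjugate to this likelihood
spikes = rng.poisson(np.exp(log_rates))
```

Because the Poisson likelihood breaks conjugacy with the Gaussian prior over `latents`, the posterior has no closed form, which is exactly what motivates the approximate inference method the abstract introduces.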
no code implementations • 9 Jun 2023 • Michael Shvartsman, Benjamin Letham, Stephen Keeley
Models for human choice prediction in preference learning and psychophysics often consider only binary response data, requiring many samples to accurately learn preferences or perceptual detection thresholds.
no code implementations • 2 Feb 2023 • Stephen Keeley, Benjamin Letham, Chase Tymms, Craig Sanders, Michael Shvartsman
Psychometric functions typically characterize binary sensory decisions along a single stimulus dimension.
no code implementations • NeurIPS 2020 • Stephen Keeley, Mikio Aoi, Yiyi Yu, Spencer Smith, Jonathan W. Pillow
Here we address this shortcoming by proposing ``signal-noise'' Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity.
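The signal/noise decomposition SNP-GPFA describes can be sketched generatively: "signal" latents are drawn once and shared across repeated trials of the same stimulus, while "noise" latents are drawn fresh on every trial. The kernels, dimensions, and baseline below are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N, trials = 80, 15, 5      # time bins, neurons, repeated trials
D_sig, D_noise = 2, 2         # signal / noise latent dimensions
t = np.arange(T, dtype=float)

def rbf_draw(dim, ell):
    """Sample `dim` smooth latent trajectories from an RBF GP prior."""
    K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell**2)
    K += 1e-6 * np.eye(T)     # jitter for numerical stability
    return np.linalg.cholesky(K) @ rng.standard_normal((T, dim))

# Signal latents: drawn once, shared across all repeats of the stimulus
signal = rbf_draw(D_sig, 8.0)
C_sig = 0.4 * rng.standard_normal((D_sig, N))
C_noise = 0.4 * rng.standard_normal((D_noise, N))

spikes = np.empty((trials, T, N), dtype=int)
for k in range(trials):
    # Noise latents: an independent draw on every trial
    noise = rbf_draw(D_noise, 3.0)
    log_rates = signal @ C_sig + noise @ C_noise - 1.0
    spikes[k] = rng.poisson(np.exp(log_rates))
```

Averaging `spikes` over the trial axis tends to preserve the shared signal component while washing out the trial-unique noise component, which is the intuition behind separating the two kinds of latent structure.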
no code implementations • NeurIPS 2017 • Anqi Wu, Nicholas G. Roy, Stephen Keeley, Jonathan W. Pillow
We apply the model to spike trains recorded from hippocampal place cells and show that it compares favorably to a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.