no code implementations • ECCV 2020 • Daniel R. Kepple, Daewon Lee, Colin Prepsius, Volkan Isler, Il Memming Park, Daniel D. Lee
In the task of recovering pan-tilt ego velocities from events, we show that each individual confident local prediction of our network can be expected to be as accurate as state-of-the-art optimization approaches that utilize the full image.
no code implementations • 3 Mar 2024 • Matthew Dowling, Yuan Zhao, Il Memming Park
We introduce an amortized variational inference algorithm and structured variational approximation for state-space models with nonlinear dynamics driven by Gaussian noise.
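As a rough sketch of the model class this paper targets (not its inference algorithm), the following simulates a state-space model with nonlinear latent dynamics driven by Gaussian noise; the dynamics function, dimensions, and noise scales are all illustrative assumptions.

```python
# Hypothetical example: a nonlinear state-space model with Gaussian-driven
# dynamics and a linear-Gaussian readout. Nothing here comes from the paper.
import numpy as np

rng = np.random.default_rng(0)
T, dim_z, dim_y = 200, 2, 10
C = rng.normal(size=(dim_y, dim_z))            # assumed linear observation map

def f(z, theta=0.1):
    """Illustrative nonlinear dynamics: slow rotation plus tanh squashing."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.tanh(R @ z)

z = np.zeros((T, dim_z))
y = np.zeros((T, dim_y))
for t in range(1, T):
    z[t] = f(z[t - 1]) + 0.05 * rng.normal(size=dim_z)  # Gaussian process noise
    y[t] = C @ z[t] + 0.1 * rng.normal(size=dim_y)      # noisy observations
```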
no code implementations • 24 Aug 2023 • Il Memming Park, Ábel Ságodi, Piotr Aleksander Sokół
Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory.
no code implementations • 1 Jun 2023 • Matthew Dowling, Yuan Zhao, Il Memming Park
In this work, we propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate computation variational inference (CVI).
no code implementations • 18 May 2023 • Matthew Dowling, Yuan Zhao, Il Memming Park
Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation.
1 code implementation • 3 Mar 2023 • Iris R. Stone, Yotam Sagiv, Il Memming Park, Jonathan W. Pillow
Latent linear dynamical systems with Bernoulli observations provide a powerful modeling framework for identifying the temporal dynamics underlying binary time series data, which arise in a variety of contexts such as binary decision-making and discrete stochastic processes (e.g., binned neural spike trains).
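A minimal sketch of this model class, with arbitrary placeholder parameters: linear-Gaussian latent dynamics pushed through a logistic link to produce Bernoulli (binary) observations.

```python
# Illustrative only: sampling binary time series from a latent linear
# dynamical system with Bernoulli observations.
import numpy as np

rng = np.random.default_rng(1)
T, dim_z, dim_y = 300, 2, 5
theta = 0.05
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # stable rotational dynamics
C = rng.normal(size=(dim_y, dim_z))                     # observation loadings
b = -np.ones(dim_y)                                     # baseline log-odds

z = np.zeros((T, dim_z))
y = np.zeros((T, dim_y), dtype=int)
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.1 * rng.normal(size=dim_z)  # linear-Gaussian latents
    p = 1.0 / (1.0 + np.exp(-(C @ z[t] + b)))           # logistic link
    y[t] = rng.binomial(1, p)                           # Bernoulli observations
```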
no code implementations • 6 Oct 2021 • Braden A. W. Brinkman, Han Yan, Arianna Maffei, Il Memming Park, Alfredo Fontanini, Jin Wang, Giancarlo La Camera
In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits.
no code implementations • 29 Sep 2021 • Ian D Jordan, Piotr A Sokol, Il Memming Park
We explain the learned mechanisms by which this network holds memory and extracts information from memory, and show that gating is a natural architectural component for achieving these structures.
1 code implementation • 9 Sep 2021 • Felix Pei, Joel Ye, David Zoltowski, Anqi Wu, Raeed H. Chowdhury, Hansem Sohn, Joseph E. O'Doherty, Krishna V. Shenoy, Matthew T. Kaufman, Mark Churchland, Mehrdad Jazayeri, Lee E. Miller, Jonathan Pillow, Il Memming Park, Eva L. Dyer, Chethan Pandarinath
We curate four datasets of neural spiking activity from cognitive, sensory, and motor areas to promote models that apply to the wide variety of activity seen across these areas.
no code implementations • 15 Jul 2021 • Matthew Dowling, Piotr Sokół, Il Memming Park
We present the class of Hida-Matérn kernels, which is the canonical family of covariance functions over the entire space of stationary Gauss-Markov processes.
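For orientation, here is a hedged sketch of the simplest nontrivial Matérn kernel (order 3/2) together with its standard two-dimensional linear-SDE representation, the sense in which such processes are Gauss-Markov; the Hida-Matérn construction in the paper generalizes this family.

```python
# Illustrative sketch, not the paper's code: the Matern-3/2 covariance and
# the companion matrix of its equivalent 2-D linear SDE (Gauss-Markov form).
import numpy as np

def matern32(tau, variance=1.0, lengthscale=1.0):
    """Stationary Matern-3/2 covariance k(tau)."""
    a = np.sqrt(3.0) * np.abs(tau) / lengthscale
    return variance * (1.0 + a) * np.exp(-a)

lam = np.sqrt(3.0) / 1.0                  # sqrt(3) / lengthscale
F = np.array([[0.0, 1.0],
              [-lam**2, -2.0 * lam]])     # dx/dt = F x + L w(t); the GP is x[0]

print(matern32(np.array([0.0, 0.5, 1.0])))
```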
1 code implementation • NeurIPS 2020 • Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
In this work, we investigate the latter by juxtaposing experimental results on the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with artificial neural networks.
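The quantity being compared is the eigenvalue spectrum of the population covariance; a minimal sketch of that computation follows (with random placeholder responses, which would not reproduce the power-law decay reported for real V1 data).

```python
# Sketch of the eigenspectrum computation only; the Gaussian placeholder data
# here would NOT show the ~1/n power law observed in mouse V1.
import numpy as np

rng = np.random.default_rng(9)
n_stimuli, n_neurons = 2000, 500
responses = rng.normal(size=(n_stimuli, n_neurons))  # stand-in for real data
responses -= responses.mean(axis=0)
cov = responses.T @ responses / n_stimuli            # neuron-by-neuron covariance
spectrum = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending eigenvalues
print(spectrum[:5])
```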
1 code implementation • NeurIPS 2020 • Diego M. Arribas, Yuan Zhao, Il Memming Park
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction.
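As a sketch of that standard objective (with an assumed exponential nonlinearity and toy data), the one-step Poisson log-likelihood of an autoregressive spike train model looks like this:

```python
# Hedged sketch: one-step log-likelihood of a Poisson autoregressive model,
# rate_t = exp(bias + w . spike_history). Filter length and data are toys.
import numpy as np

def one_step_loglik(weights, bias, spikes, lags=10):
    ll = 0.0
    for t in range(lags, len(spikes)):
        rate = np.exp(bias + weights @ spikes[t - lags:t])
        ll += spikes[t] * np.log(rate) - rate  # Poisson log-pmf, up to a constant
    return ll

rng = np.random.default_rng(2)
spikes = rng.poisson(0.3, size=500).astype(float)
print(one_step_loglik(rng.normal(scale=0.1, size=10), -1.0, spikes))
```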
no code implementations • 2 Sep 2020 • Matthew Dowling, Yuan Zhao, Il Memming Park
However, obtaining a satisfactory fit often requires burdensome model selection and fine-tuning of the form of the basis functions and their temporal span.
1 code implementation • 4 Jun 2019 • Yuan Zhao, Josue Nassar, Ian Jordan, Mónica Bugallo, Il Memming Park
Nonlinear state-space models are powerful tools to describe dynamical structures in complex time series.
no code implementations • 3 Jun 2019 • Ian D. Jordan, Piotr Aleksander Sokol, Il Memming Park
As a result, it is difficult to know a priori how well a GRU network will perform on a given task, or how well it can mimic the underlying behavior of its biological counterparts.
no code implementations • ICLR 2019 • Ian D. Jordan, Piotr Aleksander Sokol, Il Memming Park
Gated recurrent units (GRUs) were inspired by the long short-term memory (LSTM) unit as a means of capturing temporal structure with a less complex memory-unit architecture.
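For reference, the standard GRU cell update that these papers analyze, written out with random placeholder weights:

```python
# The standard GRU update equations (weights are random placeholders).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand         # gated interpolation

rng = np.random.default_rng(3)
d_h, d_x = 4, 3
params = [rng.normal(scale=0.5, size=s) for s in [(d_h, d_x), (d_h, d_h)] * 3]
print(gru_step(np.zeros(d_h), rng.normal(size=d_x), *params))
```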
1 code implementation • ICLR 2019 • Josue Nassar, Scott W. Linderman, Monica Bugallo, Il Memming Park
Many real-world systems of interest are governed by complex, nonlinear dynamics.
no code implementations • ICLR 2020 • Piotr A. Sokol, Il Memming Park
Recently, mean-field theory has been used successfully to analyze the properties of wide, random neural networks.
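A small sketch of the kind of calculation meant here (a textbook example assumed for illustration, not this paper's analysis): iterating the mean-field length map q_{l+1} = sigma_w^2 E[tanh(sqrt(q_l) z)^2] + sigma_b^2 for a wide random tanh network.

```python
# Hedged illustration of a mean-field computation for wide random networks:
# the variance ("length") map of a tanh network, via Gauss-Hermite quadrature.
import numpy as np

def length_map(q, sigma_w=1.5, sigma_b=0.1, n_quad=80):
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)  # E over z ~ N(0, 1)
    Ephi2 = np.sum(w * np.tanh(np.sqrt(q) * z) ** 2) / np.sqrt(2.0 * np.pi)
    return sigma_w**2 * Ephi2 + sigma_b**2

q = 1.0
for _ in range(30):
    q = length_map(q)                                  # iterate layer to layer
print("fixed point q* ~", q)
```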
1 code implementation • 27 Jul 2017 • Yuan Zhao, Il Memming Park
This brings the challenge of learning both the latent neural state and the underlying dynamical system, since neither is known a priori for neural systems.
1 code implementation • 11 Apr 2016 • Yuan Zhao, Il Memming Park
In the V1 dataset, we find that vLGP achieves substantially higher performance than previous methods at predicting omitted spike trains, as well as capturing both the toroidal topology of the visual stimulus space and the noise correlations.
no code implementations • NeurIPS 2015 • Anqi Wu, Il Memming Park, Jonathan W. Pillow
Subunit models provide a powerful yet parsimonious description of neural spike responses to complex stimuli.
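A minimal sketch of a generic subunit cascade (filters, nonlinearities, and pooling weights are all illustrative assumptions): the stimulus is filtered by several subunit receptive fields, passed through a shared subunit nonlinearity, pooled, and mapped to a firing rate.

```python
# Illustrative subunit cascade: rate = f( sum_k w_k * g(k_k . x) ),
# with g = square and f = softplus as placeholder choices.
import numpy as np

rng = np.random.default_rng(4)
dim_x, n_subunits = 50, 4
K = rng.normal(size=(n_subunits, dim_x)) / np.sqrt(dim_x)  # subunit filters
w = np.abs(rng.normal(size=n_subunits))                    # pooling weights

def subunit_rate(x):
    return np.log1p(np.exp(w @ np.square(K @ x)))          # softplus output

print(subunit_rate(rng.normal(size=dim_x)))
```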
no code implementations • 23 Nov 2015 • Evan Archer, Il Memming Park, Lars Buesing, John Cunningham, Liam Paninski
These models have the advantage of learning latent structure both from noisy observations and from the temporal ordering in the data, where it is assumed that meaningful correlation structure exists across time.
no code implementations • NeurIPS 2013 • Il Memming Park, Evan W. Archer, Kenneth Latimer, Jonathan W. Pillow
We also establish a condition for equivalence between the cascade-logistic and the 2nd-order maxent or "Ising" model, making cascade-logistic a reasonable choice for the base measure in a universal model.
1 code implementation • NeurIPS 2013 • Evan W. Archer, Il Memming Park, Jonathan W. Pillow
Shannon's entropy is a basic quantity in information theory, and a fundamental building block for the analysis of neural codes.
no code implementations • NeurIPS 2013 • Il Memming Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow
The quadratic form characterizes the neuron's stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range.
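In symbols, the model described here is rate = g(x'Cx + b'x + a) with g invertible; a toy instantiation follows (exponential g, low-rank-plus-diagonal C, arbitrary parameter values).

```python
# Toy quadratic-plus-nonlinearity model; parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(8)
dim_x = 20
B = rng.normal(scale=0.1, size=(dim_x, 2))
C = B @ B.T - 0.05 * np.eye(dim_x)      # low-rank-plus-diagonal quadratic form
b = rng.normal(scale=0.1, size=dim_x)
a = -1.0

def rate(x, g=np.exp):                  # g: assumed invertible nonlinearity
    return g(x @ C @ x + b @ x + a)

print(rate(rng.normal(size=dim_x)))
```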
no code implementations • 20 Oct 2013 • Il Memming Park, Sohan Seth, Steven Van Vaerenbergh
The kernel least mean squares (KLMS) algorithm is a computationally efficient nonlinear adaptive filtering method that "kernelizes" the celebrated (linear) least mean squares algorithm.
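A compact sketch of the KLMS recursion (Gaussian kernel, step size, and data are illustrative choices): each new sample joins a growing kernel expansion with coefficient eta times the prediction error.

```python
# Minimal KLMS sketch: prediction is a kernel expansion over past samples;
# each step appends the new sample with coefficient eta * error.
import numpy as np

def gauss_kernel(x, y, width=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * width**2))

def klms(X, d, eta=0.2):
    centers, alphas, preds = [], [], np.zeros(len(d))
    for n in range(len(d)):
        preds[n] = sum(a * gauss_kernel(c, X[n]) for c, a in zip(centers, alphas))
        centers.append(X[n])                    # grow the dictionary
        alphas.append(eta * (d[n] - preds[n]))  # LMS-style coefficient update
    return preds

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(300, 1))
d = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=300)  # nonlinear target
print(np.mean((d[200:] - klms(X, d)[200:]) ** 2))      # late-sample MSE
```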
2 code implementations • 2 Feb 2013 • Evan Archer, Il Memming Park, Jonathan Pillow
The Pitman-Yor process, a generalization of the Dirichlet process, provides a tractable prior distribution over the space of countably infinite discrete distributions, and has found major applications in Bayesian nonparametric statistics and machine learning.
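As a quick illustration of the prior itself (not the paper's entropy estimator), the Pitman-Yor urn scheme with discount d and concentration alpha; setting d = 0 recovers the Dirichlet process.

```python
# Pitman-Yor "Chinese restaurant" sampler; d=0 gives the Dirichlet process.
import numpy as np

def pitman_yor_crp(n, d=0.5, alpha=1.0, seed=7):
    rng = np.random.default_rng(seed)
    counts = []                                        # customers per table
    for _ in range(n):
        K = len(counts)
        probs = np.array([c - d for c in counts] + [alpha + d * K])
        k = rng.choice(K + 1, p=probs / probs.sum())
        if k == K:
            counts.append(1)                           # open a new table
        else:
            counts[k] += 1
    return counts

counts = pitman_yor_crp(1000)
print(len(counts), "clusters; largest:", max(counts))
```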
no code implementations • NeurIPS 2012 • Evan Archer, Il Memming Park, Jonathan W. Pillow
We consider the problem of estimating Shannon's entropy H in the under-sampled regime, where the number of possible symbols may be unknown or countably infinite.
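To see why the under-sampled regime is hard, here is the naive plug-in estimator and its characteristic downward bias on a toy uniform distribution (the bias this line of work is designed to correct):

```python
# Plug-in entropy estimate vs. truth for a uniform source over 1000 symbols;
# with few samples the estimate is badly biased downward.
import numpy as np

def plugin_entropy(samples):
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(5)
K = 1000
true_H = np.log(K)                        # entropy of the uniform distribution
for n in (50, 500, 50000):
    est = plugin_entropy(rng.integers(K, size=n))
    print(f"n={n:6d}  plug-in={est:.3f}  true={true_H:.3f}")
```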
no code implementations • NeurIPS 2011 • Il Memming Park, Jonathan W. Pillow
We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference.