no code implementations • NeurIPS 2020 • Lea Duncker, Laura Driscoll, Krishna V. Shenoy, Maneesh Sahani, David Sussillo
Here, we develop a novel learning rule designed to minimize interference between sequentially learned tasks in recurrent networks.
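One common way to reduce interference between sequentially learned tasks is to project weight updates away from the activity subspace a previous task relies on. The sketch below illustrates that projection idea in the abstract; the matrix `A`, the cutoff `k`, and the update rule are illustrative assumptions, not the paper's actual learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # number of recurrent units (illustrative size)

# Hypothetical: activity patterns used by a previously learned task,
# stacked as rows of A (200 samples x n units).
A = rng.standard_normal((200, n))

# Build a projector onto the subspace orthogonal to the old task's
# dominant activity directions, so updates for a new task avoid them.
U, s, _ = np.linalg.svd(A.T, full_matrices=True)
k = 10  # assume the old task occupies the top-k directions
P = np.eye(n) - U[:, :k] @ U[:, :k].T  # projector off the old subspace

# A raw gradient for the recurrent weights, then its projected version.
grad = rng.standard_normal((n, n))
grad_proj = P @ grad @ P  # restrict the update to the unused subspace

# The component of the update inside the protected subspace vanishes
# (up to floating-point error):
leak = np.linalg.norm(U[:, :k].T @ grad_proj)
print(leak)
```

Applying only the projected gradient leaves the protected directions, and hence the previously learned computation, numerically untouched.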
no code implementations • 12 Feb 2019 • Lea Duncker, Gergo Bohner, Julien Boussard, Maneesh Sahani
We develop an approach to learn an interpretable semi-parametric model of a latent continuous-time stochastic dynamical system, assuming noisy high-dimensional outputs sampled at uneven times.
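The generative setting described here — a latent continuous-time SDE observed through a noisy high-dimensional readout at uneven times — can be simulated with a simple Euler–Maruyama scheme. The drift function, dimensions, and linear observation map below are placeholder assumptions standing in for the learned semi-parametric model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder drift for dx = f(x) dt + sigma dW; not the learned model.
def drift(x):
    return -x + np.tanh(2.0 * x)

d_latent, d_obs = 2, 20
C = rng.standard_normal((d_obs, d_latent))  # assumed linear readout
sigma, obs_noise = 0.1, 0.05

# Uneven observation times, then Euler-Maruyama integration between them.
t_obs = np.sort(rng.uniform(0.0, 5.0, size=30))
x = np.zeros(d_latent)
dt = 0.01
t, ys = 0.0, []
for t_next in t_obs:
    while t < t_next:
        h = min(dt, t_next - t)  # never step past the next sample time
        x = x + drift(x) * h + sigma * np.sqrt(h) * rng.standard_normal(d_latent)
        t += h
    # Noisy high-dimensional output at this (irregular) time point.
    ys.append(C @ x + obs_noise * rng.standard_normal(d_obs))

Y = np.stack(ys)  # (30, 20): observations at uneven times
print(Y.shape)
```

The inference problem the paper addresses runs in the opposite direction: recovering an interpretable drift and latent path from data like `Y` and `t_obs`.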
no code implementations • NeurIPS 2018 • Lea Duncker, Maneesh Sahani
We introduce a novel scalable approach to identifying common latent structure in neural population spike trains, one that allows for variability both in the trajectory and in the rate of progression of the underlying computation.
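The key modelling idea — a shared latent trajectory traversed at a trial-specific speed, driving Poisson spiking — can be sketched generatively. The linear warp, one-dimensional sinusoidal latent, and exponential link below are illustrative assumptions, not the paper's inference method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each trial follows the same latent path x(tau), but at its own speed
# via a monotonic time warp tau_k(t). Sizes and names are illustrative.
T, n_neurons, n_trials = 100, 15, 3
t = np.linspace(0.0, 1.0, T)

latent = np.sin(2 * np.pi * t)      # common 1-D latent trajectory
C = rng.standard_normal(n_neurons)  # loading of the latent onto neurons
b = np.log(5.0)                     # baseline log-rate (arbitrary units)

spikes = []
for k in range(n_trials):
    speed = rng.uniform(0.7, 1.3)       # trial-specific rate of progression
    tau = np.clip(speed * t, 0.0, 1.0)  # simple linear monotonic warp
    x_k = np.interp(tau, t, latent)     # shared path at warped times
    rates = np.exp(b + np.outer(C, x_k))        # (neurons, T) firing rates
    spikes.append(rng.poisson(rates * (t[1] - t[0])))  # binned spike counts

spikes = np.stack(spikes)  # (trials, neurons, T) spike-count array
print(spikes.shape)
```

Fitting such a model means inverting this process: inferring the shared trajectory and each trial's warp jointly from the observed spike counts.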