1 code implementation • NeurIPS 2020 • Kristopher T. Jensen, Ta-Chu Kao, Marco Tripodi, Guillaume Hennequin
A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables such as head direction, spatial location, upcoming movements, or mental spatial transformations.
1 code implementation • NeurIPS 2021 • Ta-Chu Kao, Kristopher T. Jensen, Gido M. van de Ven, Alberto Bernacchia, Guillaume Hennequin
In contrast, artificial agents are prone to 'catastrophic forgetting' whereby performance on previous tasks deteriorates rapidly as new ones are acquired.
1 code implementation • 23 Nov 2020 • Ta-Chu Kao, Guillaume Hennequin
Sylvester, Lyapunov, and algebraic Riccati equations are the bread and butter of control theorists.
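For readers outside control theory, a continuous-time Lyapunov equation A X + X Aᵀ = −Q can be solved in a few lines with SciPy's standard dense routine. This is only an illustrative baseline with made-up matrices, not the method developed in the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix A (shifted so all eigenvalues have
# negative real part) and noise covariance Q.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 0.5) * np.eye(4)
Q = np.eye(4)

# Solve the continuous-time Lyapunov equation A X + X A^T = -Q.
X = solve_continuous_lyapunov(A, -Q)

# X is then the stationary covariance of dx = A x dt + noise;
# check the residual of the equation.
residual = A @ X + X @ A.T + Q
print(np.abs(residual).max())
```

Dense solvers like this scale cubically and map poorly to GPUs, which is the practical limitation the paper addresses.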
no code implementations • NeurIPS 2018 • Alberto Bernacchia, Mate Lengyel, Guillaume Hennequin
Stochastic gradient descent (SGD) remains the method of choice for deep learning, despite the limitations arising for ill-behaved objective functions.
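To make the "ill-behaved objective" point concrete, here is a small sketch (not taken from the paper) showing that plain gradient descent on a simple quadratic slows down roughly in proportion to the condition number of the Hessian:

```python
import numpy as np

def gd_iters(kappa, tol=1e-6, max_iters=100000):
    """Iterations for gradient descent to reach ||grad|| < tol on
    f(x) = 0.5 * x^T H x with H = diag(1, kappa)."""
    h = np.array([1.0, kappa])
    lr = 1.0 / kappa            # step size limited by the largest eigenvalue
    x = np.ones(2)
    for i in range(max_iters):
        grad = h * x            # gradient of f is H x
        if np.linalg.norm(grad) < tol:
            return i
        x -= lr * grad
    return max_iters

# Convergence degrades as the condition number kappa grows.
print(gd_iters(10.0), gd_iters(1000.0))
```

Second-order and natural-gradient methods are attractive precisely because their convergence is far less sensitive to this conditioning.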
no code implementations • NeurIPS 2014 • Dylan Festa, Guillaume Hennequin, Mate Lengyel
The persistent and graded activity often observed in cortical circuits is sometimes seen as a signature of autoassociative retrieval of memories stored earlier in synaptic efficacies.
no code implementations • NeurIPS 2014 • Guillaume Hennequin, Laurence Aitchison, Mate Lengyel
Multiple lines of evidence support the notion that the brain performs probabilistic inference in multiple cognitive domains, including perception and decision making.
no code implementations • NeurIPS 2009 • Henning Sprekeler, Guillaume Hennequin, Wulfram Gerstner
Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect.
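For readers unfamiliar with the policy gradient framework this abstract refers to, here is a generic REINFORCE sketch on a hypothetical two-armed bandit; the rewards, learning rate, and baseline are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed bandit: arm 1 pays off more often than arm 0.
reward_prob = np.array([0.2, 0.8])
theta = np.zeros(2)        # policy logits
baseline = 0.0             # running-average reward baseline (variance reduction)
lr = 0.2

for _ in range(5000):
    p = np.exp(theta - theta.max())
    p /= p.sum()                                 # softmax policy
    a = rng.choice(2, p=p)
    r = float(rng.random() < reward_prob[a])     # stochastic binary reward
    grad_logp = -p
    grad_logp[a] += 1.0                          # gradient of log pi(a | theta)
    theta += lr * (r - baseline) * grad_logp     # REINFORCE update
    baseline += 0.01 * (r - baseline)

p = np.exp(theta - theta.max())
p /= p.sum()
print(p)   # policy concentrates on the better arm
```

The paper's point is that when the "action" is a spike train, which statistics of that spike train enter the score function `grad_logp` determines which synaptic learning rule falls out of the same derivation.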
no code implementations • NeurIPS 2020 • Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin
Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.
no code implementations • ICLR 2022 • Marine Schimel, Ta-Chu Kao, Kristopher T Jensen, Guillaume Hennequin
To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding.
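A minimal sketch of the generative picture described above, with latent linear dynamics observed through a loading matrix. All dimensions, dynamics, and noise levels here are hypothetical choices for illustration, not the model class used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_latent, n_neurons = 200, 2, 20

# Latent dynamics z_t = A z_{t-1} + noise, with A a slightly contracting rotation.
theta = 0.1
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
C = rng.standard_normal((n_neurons, d_latent))   # observation (loading) matrix

z = np.zeros((T, d_latent))
z[0] = rng.standard_normal(d_latent)
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.05 * rng.standard_normal(d_latent)

# Observed "neural activity": latent states mapped through C plus noise.
y = z @ C.T + 0.1 * rng.standard_normal((T, n_neurons))

# The observations are approximately low-rank: the spectrum of the centered
# data drops sharply after d_latent singular values.
_, s, _ = np.linalg.svd(y - y.mean(0), full_matrices=False)
print(s[:4])
```

Inference then runs in the opposite direction: recover the latent trajectories z from the recorded y, and relate them back to behaviour via a decoder.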
no code implementations • NeurIPS 2021 • Kristopher Jensen, Ta-Chu Kao, Jasmine Stone, Guillaume Hennequin
We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task.