Search Results for author: Guillaume Hennequin

Found 10 papers, 3 papers with code

Code-specific policy gradient rules for spiking neurons

no code implementations NeurIPS 2009 Henning Sprekeler, Guillaume Hennequin, Wulfram Gerstner

Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect.
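To make the abstract's point concrete, below is a REINFORCE-style rule, the simplest member of the policy gradient family it refers to. This is a sketch under illustrative assumptions, not the paper's derivation: a single Bernoulli (sigmoid-rate) neuron, a spike-count code, and a scalar reward delivered at the end of each trial.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Illustrative toy setup: one Bernoulli-spiking neuron, 20 inputs, 100 time bins.
n_in, T, lr = 20, 100, 0.1
w = np.zeros(n_in)
x = rng.standard_normal((T, n_in))           # input pattern per time bin
target_rate = 0.2                            # reward favours this firing rate

for trial in range(500):
    p = sigmoid(x @ w)                       # spike probability per bin
    s = (rng.random(T) < p).astype(float)    # sampled spike train
    # REINFORCE eligibility: d log P(s | w) / dw, summed over the trial
    elig = ((s - p)[:, None] * x).sum(axis=0)
    R = -abs(s.mean() - target_rate)         # scalar end-of-trial reward
    w += lr * R * elig                       # policy gradient ascent (no baseline,
                                             # so updates are high-variance)

print("final mean rate:", sigmoid(x @ w).mean())
```

Assuming a different neural code (e.g. precise spike timing rather than spike count) changes which features of s enter the reward, and hence which rule emerges; that dependence is the paper's subject.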

Analog Memories in a Balanced Rate-Based Network of E-I Neurons

no code implementations NeurIPS 2014 Dylan Festa, Guillaume Hennequin, Mate Lengyel

The persistent and graded activity often observed in cortical circuits is sometimes seen as a signature of autoassociative retrieval of memories stored earlier in synaptic efficacies.

Retrieval
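For concreteness, here is a minimal simulation of the model class named in the title: a rate network with separate excitatory and inhibitory populations obeying Dale's law. This is not the paper's optimized networks; all sizes, weights, and the nonlinearity are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes and weights (not the paper's trained parameters).
nE, nI = 40, 10
n = nE + nI
W = np.abs(rng.standard_normal((n, n))) / np.sqrt(n)
W[:, nE:] *= -4.0                       # inhibitory columns negative: Dale's law
h = 0.5 * rng.standard_normal(n)        # constant external drive
tau, dt = 20.0, 1.0                     # membrane time constant and step (ms)

def f(u):                               # bounded positive rate nonlinearity
    return np.tanh(np.maximum(u, 0.0))

u = np.zeros(n)
for _ in range(2000):                   # relax: tau du/dt = -u + W f(u) + h
    u += (dt / tau) * (-u + W @ f(u) + h)

print("steady-state rates:", f(u)[:5])  # graded (analog) persistent activity
```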

Fast Sampling-Based Inference in Balanced Neuronal Networks

no code implementations NeurIPS 2014 Guillaume Hennequin, Laurence Aitchison, Mate Lengyel

Multiple lines of evidence support the notion that the brain performs probabilistic inference in multiple cognitive domains, including perception and decision making.

Decision Making
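"Sampling-based inference" here means representing a posterior by stochastic dynamics whose stationary distribution is that posterior. The paper is about making such sampling fast with balanced-network dynamics; the sketch below shows only the standard (slow) Langevin baseline for a 2-D Gaussian target, with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target posterior: 2-D Gaussian with mean mu and covariance Sigma (illustrative).
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
P = np.linalg.inv(Sigma)                 # precision matrix

dt, n_steps = 0.01, 50_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    drift = -P @ (x - mu)                # gradient of the log-density
    x += dt * drift + np.sqrt(2 * dt) * rng.standard_normal(2)
    samples[t] = x

print(samples[1000:].mean(axis=0))       # approx. mu after burn-in
print(np.cov(samples[1000:].T))          # approx. Sigma
```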

Exact natural gradient in deep linear networks and its application to the nonlinear case

no code implementations NeurIPS 2018 Alberto Bernacchia, Mate Lengyel, Guillaume Hennequin

Stochastic gradient descent (SGD) remains the method of choice for deep learning, despite the limitations arising for ill-behaved objective functions.
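For reference, the natural gradient update in the title preconditions the loss gradient with the inverse Fisher information, in contrast to plain SGD. This is the standard textbook form, not the paper's exact closed-form result for deep linear networks:

```latex
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1}\, \nabla_\theta L(\theta_t),
\qquad
F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log p_\theta(x)\, \nabla_\theta \log p_\theta(x)^{\top} \right].
```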

Manifold GPLVMs for discovering non-Euclidean latent structure in neural data

1 code implementation NeurIPS 2020 Kristopher T. Jensen, Ta-Chu Kao, Marco Tripodi, Guillaume Hennequin

A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables such as head direction, spatial location, upcoming movements, or mental spatial transformations.

Variational Inference

Automatic differentiation of Sylvester, Lyapunov, and algebraic Riccati equations

1 code implementation 23 Nov 2020 Ta-Chu Kao, Guillaume Hennequin

Sylvester, Lyapunov, and algebraic Riccati equations are the bread and butter of control theorists.
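For example, the continuous-time Lyapunov equation A X + X Aᵀ + Q = 0 can be solved with SciPy's stock routine; the sketch below is illustrative and independent of the paper's code (note SciPy's sign convention).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)

# Solve A X + X A^T + Q = 0 for X (continuous-time Lyapunov equation).
n = 4
A = -np.eye(n) + 0.3 * rng.standard_normal((n, n))   # stable-ish A (illustrative)
Q = np.eye(n)

# SciPy solves A X + X A^T = Q, so pass -Q to match the sign convention above.
X = solve_continuous_lyapunov(A, -Q)
print(np.allclose(A @ X + X @ A.T + Q, 0))           # True
```

The key fact behind differentiating such solvers is that, up to sign conventions, the gradient of a scalar loss with respect to Q is itself the solution S of an adjoint Lyapunov equation, Aᵀ S + S A + X̄ = 0, where X̄ is the loss gradient with respect to X, so a backward pass costs one more solve of the same kind.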

Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

no code implementations NeurIPS 2020 Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin

Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.

Gaussian Processes, Model Selection, +1
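The non-reversibility property can be stated in one line; the following is a standard characterization consistent with the abstract, not a quote from the paper. For a stationary vector-valued GP with cross-covariance K(τ) = Cov(x(t+τ), x(t)), one always has K(−τ) = K(τ)ᵀ, so reversibility amounts to K(τ) being symmetric, and non-reversible priors are exactly those with a nonzero antisymmetric part:

```latex
K(\tau) = \mathrm{Cov}\!\big(x(t+\tau),\, x(t)\big), \qquad K(-\tau) = K(\tau)^\top \ \text{(always)};
\qquad
\text{reversible} \;\Longleftrightarrow\; K(\tau) = K(-\tau) \;\Longleftrightarrow\; K(\tau) = K(\tau)^\top.
```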

Natural continual learning: success is a journey, not (just) a destination

1 code implementation NeurIPS 2021 Ta-Chu Kao, Kristopher T. Jensen, Gido M. van de Ven, Alberto Bernacchia, Guillaume Hennequin

In contrast, artificial agents are prone to 'catastrophic forgetting' whereby performance on previous tasks deteriorates rapidly as new ones are acquired.

Continual Learning
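To make the mitigation side concrete: the standard quadratic-penalty recipe (EWC-style) anchors parameters that mattered for old tasks while the rest adapt. The sketch below is that generic baseline, explicitly not the paper's natural continual learning algorithm; all numbers are illustrative.

```python
import numpy as np

# EWC-style quadratic penalty (illustrative; NOT the paper's NCL algorithm):
# after task A, anchor parameters at theta_A, weighted by a Fisher estimate F.

def penalized_grad(grad_new_task, theta, theta_A, F, lam=1.0):
    """Gradient of L_new + (lam/2) * sum_i F_i * (theta_i - theta_A_i)^2."""
    return grad_new_task + lam * F * (theta - theta_A)

# Toy usage: quadratic losses for two 'tasks' with different optima.
theta_A = np.array([1.0, -2.0])             # optimum found on task A
F = np.array([5.0, 0.1])                    # task A cares mostly about theta[0]
theta = theta_A.copy()
for _ in range(200):
    grad_B = theta - np.array([3.0, 3.0])   # task B pulls towards (3, 3)
    theta -= 0.05 * penalized_grad(grad_B, theta, theta_A, F)

print(theta)   # theta[0] stays close to 1 (protected); theta[1] moves towards 3
```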

iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data

no code implementations ICLR 2022 Marine Schimel, Ta-Chu Kao, Kristopher T Jensen, Guillaume Hennequin

To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding.

Model Optimization, Variational Inference
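The model class being described can be written down in a few lines: latent states evolve under (possibly input-driven) dynamics, and observed activity is a noisy readout. The sketch below is a linear toy version with illustrative dimensions, not the iLQR-VAE inference machinery itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal input-driven linear latent dynamical system (illustrative):
#   x_{t+1} = A x_t + B u_t + process noise
#   y_t     = C x_t + observation noise
T, d_x, d_u, d_y = 200, 3, 1, 30
A = 0.95 * np.eye(d_x) + 0.05 * rng.standard_normal((d_x, d_x))
B = rng.standard_normal((d_x, d_u))
C = rng.standard_normal((d_y, d_x))
u = rng.standard_normal((T, d_u))            # unobserved inputs to be inferred

x = np.zeros((T, d_x))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t] + 0.1 * rng.standard_normal(d_x)
y = x @ C.T + 0.5 * rng.standard_normal((T, d_y))   # 'recorded' population activity
```

Inference then means recovering the latent trajectories (and inputs) from y; iLQR-VAE casts that inference as an optimal-control problem solved with iLQR.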

Scalable Bayesian GPFA with automatic relevance determination and discrete noise models

no code implementations NeurIPS 2021 Kristopher Jensen, Ta-Chu Kao, Jasmine Stone, Guillaume Hennequin

We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task.

Variational Inference
