Search Results for author: Walter Senn

Found 19 papers, 6 papers with code

Precision estimation and second-order prediction errors in cortical circuits

no code implementations • 27 Sep 2023 • Arno Granier, Mihai A. Petrovici, Walter Senn, Katharina A. Wilmes

Minimization of cortical prediction errors is believed to be a key canonical computation of the cerebral cortex underlying perception, action and learning.
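Prediction-error minimization of this kind can be illustrated with a minimal linear predictive-coding sketch (the variables `W`, `mu`, and the learning rate are illustrative assumptions, not taken from the paper): a latent estimate is updated by gradient descent until the top-down prediction matches the sensory input.

```python
import numpy as np

# Minimal predictive-coding sketch: a latent estimate mu is updated by
# gradient descent to minimize the squared prediction error between a
# top-down prediction W @ mu and the sensory input x.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))          # generative weights (illustrative)
x = W @ np.array([1.0, -0.5])        # input consistent with a latent cause

mu = np.zeros(2)                     # initial latent estimate
lr = 0.1
for _ in range(200):
    error = x - W @ mu               # bottom-up prediction error
    mu += lr * W.T @ error           # descend the squared-error energy

print(np.round(mu, 3))               # converges near [1.0, -0.5]
```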

Learning beyond sensations: how dreams organize neuronal representations

no code implementations • 3 Aug 2023 • Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, Jakob Jordan

However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs.

Contrastive Learning

Learning efficient backprojections across cortical hierarchies in real time

no code implementations • 20 Dec 2022 • Kevin Max, Laura Kriener, Garibaldi Pineda García, Thomas Nowotny, Walter Senn, Mihai A. Petrovici

Models of sensory processing and learning in the cortex need to efficiently assign credit to synapses in all areas.

Learning cortical representations through perturbed and adversarial dreaming

1 code implementation • 9 Sep 2021 • Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, Jakob Jordan

We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs).

Learning Semantic Representations
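The adversarial setup the abstract alludes to can be sketched in a few lines (a one-parameter toy, not the paper's cortical architecture; all variable names are illustrative): a generator maps latent noise to samples, a discriminator scores real versus generated inputs, and the two standard GAN losses are computed for one batch.

```python
import numpy as np

# Adversarial objective sketch (illustrative, not the paper's model):
# a "generator" maps latent noise to samples, a "discriminator" scores
# them, and we compute the two standard GAN losses for one batch.
rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w_g, w_d = 0.5, 1.0                    # one-parameter generator/discriminator
z = rng.normal(size=16)                # latent noise ("dream" input)
x_real = rng.normal(loc=2.0, size=16)  # sensory data

x_fake = w_g * z                       # generated ("dreamed") samples
d_real = sigmoid(w_d * x_real)         # discriminator on real inputs
d_fake = sigmoid(w_d * x_fake)         # discriminator on generated inputs

loss_d = -np.mean(np.log(d_real) + np.log(1 - d_fake))
loss_g = -np.mean(np.log(d_fake))      # non-saturating generator loss
print(loss_d, loss_g)
```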

Conductance-based dendrites perform Bayes-optimal cue integration

no code implementations • 27 Apr 2021 • Jakob Jordan, João Sacramento, Willem A. M. Wybo, Mihai A. Petrovici, Walter Senn

We propose a novel Bayesian view on the dynamics of conductance-based neurons and synapses, which suggests that they are naturally equipped to perform information integration optimally.
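The normative computation behind Bayes-optimal cue integration is precision-weighted averaging, which the paper proposes conductance-based dendrites implement; this sketch shows only the normative arithmetic, with illustrative cue values.

```python
# Bayes-optimal fusion of two noisy Gaussian cues about the same quantity:
# the combined estimate weights each cue by its precision (inverse variance).
mu1, var1 = 2.0, 1.0      # cue 1: mean and variance (illustrative)
mu2, var2 = 4.0, 0.25     # cue 2: more reliable (lower variance)

p1, p2 = 1.0 / var1, 1.0 / var2          # precisions
mu_post = (p1 * mu1 + p2 * mu2) / (p1 + p2)
var_post = 1.0 / (p1 + p2)

print(mu_post, var_post)   # 3.6, 0.2 — pulled toward the reliable cue
```

Note that the posterior variance is always smaller than either cue's variance: combining cues never hurts under this model.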

Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming

no code implementations • 8 Feb 2021 • Henrik D. Mettler, Maximilian Schmidt, Walter Senn, Mihai A. Petrovici, Jakob Jordan

We formulate the search for phenomenological models of synaptic plasticity as an optimization problem.
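The optimization framing can be sketched with a much simpler search space than the paper's Cartesian Genetic Programming (a coefficient vector instead of a program graph, and a (1+1) evolutionary loop; the fitness function and all names are illustrative assumptions): a plasticity rule is scored by how selectively it grows the weight between correlated neurons.

```python
import numpy as np

# Sketch of plasticity-rule search as optimization (illustrative, not
# the paper's Cartesian Genetic Programming setup): a rule is a vector
# of coefficients in dw = a*pre*post + b*pre + c*post + d, evolved to
# grow the weight between correlated neurons only.
rng = np.random.default_rng(3)

pre = rng.standard_normal(500)
post_corr = pre + 0.1 * rng.standard_normal(500)     # correlated partner
post_unc = rng.standard_normal(500)                  # uncorrelated partner

def fitness(rule):
    a, b, c, d = rule
    w_corr = np.sum(a * pre * post_corr + b * pre + c * post_corr + d)
    w_unc = np.sum(a * pre * post_unc + b * pre + c * post_unc + d)
    return w_corr - w_unc        # reward selectivity for correlation

best = rng.standard_normal(4)
for _ in range(200):             # simple (1+1) evolutionary loop
    child = best + 0.3 * rng.standard_normal(4)
    if fitness(child) > fitness(best):
        best = child
print(fitness(best) > 0)         # evolved rule is correlation-selective
```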

Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks

no code implementations • 15 Nov 2019 • Thomas Mesnard, Gaetan Vignoud, Joao Sacramento, Walter Senn, Yoshua Bengio

This reduced system combines the essential elements of a working, biologically abstracted analogue of backpropagation, together with a simple formulation and proofs of the associated results.

Dendritic cortical microcircuits approximate the backpropagation algorithm

no code implementations • NeurIPS 2018 • João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn

Deep learning has seen remarkable developments in recent years, many of them inspired by neuroscience.

Stochasticity from function -- why the Bayesian brain may need no noise

no code implementations • 21 Sep 2018 • Dominik Dold, Ilja Bytschok, Akos F. Kungl, Andreas Baumbach, Oliver Breitwieser, Walter Senn, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing.

Bayesian Inference
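A sampling-based encoding scheme of the kind the abstract describes can be illustrated with Gibbs sampling in a tiny Boltzmann machine (the couplings, biases, and network size are illustrative assumptions): over time, the binary state visits configurations with frequencies matching a Boltzmann distribution, so trial-to-trial variability carries the probability distribution rather than being noise.

```python
import numpy as np

# Sampling-based encoding sketch: Gibbs sampling in a two-unit
# Boltzmann machine. State-visit frequencies approximate the analytic
# Boltzmann distribution p(z) ∝ exp(z·b + z0*z1*W01).
rng = np.random.default_rng(2)
W = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric coupling (illustrative)
b = np.array([-0.5, 0.5])                # biases

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

z = np.zeros(2)
counts = np.zeros((2, 2))
for _ in range(20000):
    for i in (0, 1):                      # Gibbs update of each unit
        p_on = sigmoid(W[i] @ z + b[i])
        z[i] = float(rng.random() < p_on)
    counts[int(z[0]), int(z[1])] += 1

freq = counts / counts.sum()              # empirical state frequencies

# Analytic Boltzmann probabilities for comparison
states = [(a, c) for a in (0, 1) for c in (0, 1)]
e = np.array([np.exp(a * b[0] + c * b[1] + a * c * W[0, 1]) for a, c in states])
p = (e / e.sum()).reshape(2, 2)
print(np.round(freq - p, 3))              # small sampling error everywhere
```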

Dendritic error backpropagation in deep cortical microcircuits

1 code implementation • 30 Dec 2017 • João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn

Animal behaviour depends on learning to associate sensory stimuli with the desired motor command.

Spiking neurons with short-term synaptic plasticity form superior generative networks

no code implementations • 24 Sep 2017 • Luziwei Leng, Roman Martel, Oliver Breitwieser, Ilja Bytschok, Walter Senn, Johannes Schemmel, Karlheinz Meier, Mihai A. Petrovici

In this work, we use networks of leaky integrate-and-fire neurons that are trained to perform both discriminative and generative tasks in their forward and backward information processing paths, respectively.
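The leaky integrate-and-fire neuron model underlying such networks is simple to state: the membrane potential decays toward rest, integrates input current, and emits a spike with reset when it crosses threshold. This sketch simulates a single such neuron with illustrative parameters (not values from the paper).

```python
# Single leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# Parameters (time step, membrane time constant, rest/threshold/reset
# potentials, input current) are illustrative.
dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, 0.0, 1.0, 0.0
I = 0.15                                   # constant input current

v, spikes = v_rest, []
for step in range(1000):
    v += dt / tau * (v_rest - v) + dt * I  # leaky integration
    if v >= v_thresh:                      # threshold crossing
        spikes.append(step)
        v = v_reset                        # reset after spike
print(len(spikes))                         # prints 9: regular firing
```

With this drive the steady-state potential (1.5) exceeds threshold, so the neuron fires periodically; lowering `I` below 0.1 would make it silent.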

Feedforward Initialization for Fast Inference of Deep Generative Networks is biologically plausible

no code implementations • 6 Jun 2016 • Yoshua Bengio, Benjamin Scellier, Olexa Bilaniuk, Joao Sacramento, Walter Senn

We find conditions under which a simple feedforward computation is a very good initialization for inference, after the input units are clamped to observed values.

Sequence learning with hidden units in spiking neural networks

no code implementations • NeurIPS 2011 • Johanni Brea, Walter Senn, Jean-Pascal Pfister

We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns.
