Search Results for author: Benjamin F. Grewe

Found 14 papers, 11 papers with code

Bio-Inspired, Task-Free Continual Learning through Activity Regularization

no code implementations · 8 Dec 2022 · Francesco Lässig, Pau Vilimelis Aceituno, Martino Sorbaro, Benjamin F. Grewe

We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation.

Continual Learning · Split-MNIST
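
As a sketch of the split-MNIST protocol the abstract refers to (illustrative only, not the authors' code; the `split_mnist_tasks` helper and the toy labels are made up), the ten digit classes are partitioned into five sequential binary tasks that the learner sees one at a time:

```python
def split_mnist_tasks(labels, pairs=((0, 1), (2, 3), (4, 5), (6, 7), (8, 9))):
    """Group example indices into one subset per two-class task."""
    tasks = []
    for a, b in pairs:
        idx = [i for i, y in enumerate(labels) if y in (a, b)]
        tasks.append({"classes": (a, b), "indices": idx})
    return tasks

# Toy labels standing in for real MNIST annotations.
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 9]
tasks = split_mnist_tasks(labels)
```

Continual-learning methods are then evaluated on how well they retain the earlier binary tasks after training on the later ones.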

Meta-Learning via Classifier(-free) Diffusion Guidance

1 code implementation · 17 Oct 2022 · Elvis Nava, Seijin Kobayashi, Yifei Yin, Robert K. Katzschmann, Benjamin F. Grewe

Our methods repurpose the popular generative image synthesis techniques of natural language guidance and diffusion models to generate neural network weights adapted for tasks.

Few-Shot Learning · Image Generation · +2
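
The classifier-free guidance the title alludes to combines a denoiser's conditional and unconditional predictions with a guidance weight; a generic numpy sketch of that mixing step (the standard technique, not the paper's weight-generation pipeline):

```python
import numpy as np

def cfg_prediction(eps_uncond, eps_cond, w):
    """Guided noise estimate: push past the conditional prediction by weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_u = np.array([0.0, 1.0])   # unconditional denoiser output
eps_c = np.array([1.0, 1.0])   # text-conditioned denoiser output
guided = cfg_prediction(eps_u, eps_c, w=2.0)  # → [2., 1.]
```

With w = 1 this reduces to the conditional prediction; w > 1 amplifies the conditioning signal, which is what makes the guidance "classifier-free".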

Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions

1 code implementation · 25 Jul 2022 · Hamza Keurti, Hsiao-Ru Pan, Michel Besserve, Benjamin F. Grewe, Bernhard Schölkopf

How can we acquire world models that veridically represent the outside world both in terms of what is there and in terms of how our actions affect it?

Representation Learning · Trajectory Prediction

A Theory of Natural Intelligence

no code implementations · 22 Apr 2022 · Christoph von der Malsburg, Thilo Stadelmann, Benjamin F. Grewe

In contrast to current AI technology, natural intelligence -- the kind of autonomous intelligence realized in the brains of animals and humans, which attains goals defined by a repertoire of innate behavioral schemata in their natural environment -- is far superior in terms of learning speed, generalization capability, autonomy and creativity.

Inductive Bias

Minimizing Control for Credit Assignment with Strong Feedback

2 code implementations · 14 Apr 2022 · Alexander Meulemans, Matilde Tristany Farinha, Maria R. Cervera, João Sacramento, Benjamin F. Grewe

Building upon deep feedback control (DFC), a recently proposed credit assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.

Fast Aquatic Swimmer Optimization with Differentiable Projective Dynamics and Neural Network Hydrodynamic Models

no code implementations · 30 Mar 2022 · Elvis Nava, John Z. Zhang, Mike Y. Michelis, Tao Du, Pingchuan Ma, Benjamin F. Grewe, Wojciech Matusik, Robert K. Katzschmann

For the deformable solid simulation of the swimmer's body, we use state-of-the-art techniques from the field of computer graphics to speed up the finite-element method (FEM).

Uncertainty estimation under model misspecification in neural network regression

1 code implementation · 23 Nov 2021 · Maria R. Cervera, Rafael Dätwyler, Francesco D'Angelo, Hamza Keurti, Benjamin F. Grewe, Christian Henning

Although neural networks are powerful function approximators, the underlying modelling assumptions ultimately define the likelihood and thus the hypothesis class they are parameterizing.

Decision Making · Regression
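
To illustrate how the likelihood assumption shapes the loss (a generic example, not the paper's code): under a fixed unit-variance Gaussian likelihood, the negative log-likelihood reduces to MSE up to a constant, whereas letting the network also predict a variance changes both the hypothesis class and the objective:

```python
import numpy as np

def gaussian_nll(y, mu, var):
    """Negative log-likelihood of y under N(mu, var), averaged over points."""
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

y = np.array([0.0, 1.0, 2.0])
mu = np.array([0.1, 0.9, 2.2])                       # network mean predictions
fixed = gaussian_nll(y, mu, np.ones_like(y))         # homoscedastic: MSE/2 + const
hetero = gaussian_nll(y, mu, np.array([0.5, 1.0, 2.0]))  # predicted variances
```

The two losses rank the same mean predictions differently, which is one concrete way the modelling assumptions "define the likelihood" as the abstract puts it.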

Credit Assignment in Neural Networks through Deep Feedback Control

3 code implementations · NeurIPS 2021 · Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordóñez, Pau Vilimelis Aceituno, João Sacramento, Benjamin F. Grewe

The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output.

Posterior Meta-Replay for Continual Learning

3 code implementations · NeurIPS 2021 · Christian Henning, Maria R. Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento

We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay.

Continual Learning
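
A task-conditioned hypernetwork, in its most minimal form, maps a task embedding to the parameters of a target network, so each task gets its own generated weights. The sketch below is illustrative, with made-up sizes, and is not the posterior meta-replay implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, in_dim, out_dim = 4, 3, 2

# Hypernetwork: a single linear map from task embedding to the
# flattened weights of the target layer.
H = rng.normal(size=(in_dim * out_dim, emb_dim))

def target_forward(x, task_embedding):
    """Run the target layer with weights generated for this task."""
    W = (H @ task_embedding).reshape(out_dim, in_dim)
    return W @ x

x = np.ones(in_dim)
y_task0 = target_forward(x, rng.normal(size=emb_dim))
y_task1 = target_forward(x, rng.normal(size=emb_dim))
```

Only the shared hypernetwork parameters are trained; conditioning on a per-task embedding is what lets one set of meta-parameters serve many tasks.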

Neural networks with late-phase weights

2 code implementations · ICLR 2021 · Johannes von Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento

The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD).

Image Classification
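
The SGD update the sentence refers to fits in a few lines (a generic sketch of plain stochastic gradient descent, not the paper's late-phase method):

```python
def sgd_step(w, grad, lr=0.1):
    """One SGD update: w <- w - lr * grad, elementwise."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Minimize f(w) = sum(w_i^2); its gradient is 2 * w.
w = [1.0, -2.0]
for _ in range(100):
    w = sgd_step(w, [2 * wi for wi in w])
```

Variants such as momentum or weight averaging modify this basic rule, which is the family of methods the abstract's "some variant of stochastic gradient descent" points to.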

A Theoretical Framework for Target Propagation

2 code implementations · NeurIPS 2020 · Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe

Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.
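
In its textbook form, target propagation assigns each layer a local target by mapping the next layer's target backward through an (approximate) inverse of the forward function, instead of backpropagating gradients. A schematic with an exactly invertible toy layer (not the paper's analysis):

```python
import numpy as np

def invertible_layer(h, W):
    """Forward pass of one layer: elementwise tanh of a linear map."""
    return np.tanh(W @ h)

def layer_inverse(t, W):
    """Map a target backward: exact inverse of the layer above."""
    return np.linalg.solve(W, np.arctanh(t))

W = np.array([[0.5, 0.1],
              [0.0, 0.4]])      # invertible toy weights
h = np.array([0.2, -0.1])
out = invertible_layer(h, W)
t_prev = layer_inverse(out, W)  # recovers h when the inverse is exact
```

In practice the inverse is a learned feedback network rather than an exact one, and analyzing the error this approximation introduces is part of what a mathematical-optimization treatment of TP addresses.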
