Search Results for author: Francesco D'Angelo

Found 9 papers, 7 papers with code

Why Do We Need Weight Decay in Modern Deep Learning?

1 code implementation • 6 Oct 2023 • Maksym Andriushchenko, Francesco D'Angelo, Aditya Varre, Nicolas Flammarion

In this work, we highlight that the role of weight decay in modern deep learning is different from its regularization effect studied in classical learning theory.

Learning Theory • Stochastic Optimization
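For context on the distinction the abstract draws, here is a minimal sketch (not the paper's code; model, data, and hyperparameters are toy assumptions) contrasting classical L2 regularization, where the penalty enters the loss, with decoupled weight decay as in AdamW, where the optimizer shrinks weights directly:

```python
# Minimal sketch: L2 penalty in the loss vs. decoupled weight decay (AdamW).
# All sizes and hyperparameters are illustrative assumptions.
import torch

model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

# (a) Classical L2 regularization: the penalty is part of the loss,
# so it shows up in the gradient like any other term.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-4
loss = torch.nn.functional.mse_loss(model(x), y)
loss = loss + lam * sum((p ** 2).sum() for p in model.parameters())
opt.zero_grad()
loss.backward()
opt.step()

# (b) Decoupled weight decay: the optimizer shrinks the weights
# independently of the loss gradient.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss = torch.nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```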

Uncertainty estimation under model misspecification in neural network regression

1 code implementation • 23 Nov 2021 • Maria R. Cervera, Rafael Dätwyler, Francesco D'Angelo, Hamza Keurti, Benjamin F. Grewe, Christian Henning

Although neural networks are powerful function approximators, the underlying modelling assumptions ultimately define the likelihood and thus the hypothesis class they are parameterizing.

Decision Making • regression
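To make the abstract's point concrete, a minimal sketch (not the paper's code; backbone, data, and noise scale are toy assumptions) of how the likelihood choice, not just the network, defines the hypothesis class in regression:

```python
# Minimal sketch: two Gaussian likelihoods over the same toy backbone.
# Misspecifying the noise model changes the hypothesis class being fit.
import torch

net = torch.nn.Linear(5, 2)  # toy backbone; outputs [mean, log-variance]
x, y = torch.randn(16, 5), torch.randn(16, 1)
out = net(x)
mu, log_var = out[:, :1], out[:, 1:]

# (a) Homoscedastic Gaussian likelihood: a fixed noise scale sigma is assumed.
sigma = 0.1
nll_homo = 0.5 * ((y - mu) ** 2 / sigma ** 2).mean()

# (b) Heteroscedastic Gaussian likelihood: input-dependent variance is
# itself part of the model, so its misspecification affects the posterior.
nll_hetero = 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
```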

On out-of-distribution detection with Bayesian neural networks

1 code implementation • 12 Oct 2021 • Francesco D'Angelo, Christian Henning

In this paper, we question this assumption and show that proper Bayesian inference with function space priors induced by neural networks does not necessarily lead to good OOD detection.

Bayesian Inference • Gaussian Processes • +2
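For readers unfamiliar with the setup the paper questions, a minimal sketch of the standard BNN OOD score, predictive entropy averaged over posterior weight samples. This is an illustration, not the paper's code: independently initialized toy classifiers stand in for true posterior samples.

```python
# Minimal sketch: predictive-entropy OOD scoring from "posterior samples".
import torch

def predictive_entropy(models, x):
    # Average class probabilities across posterior samples, then take
    # the entropy of the averaged predictive distribution.
    probs = torch.stack([m(x).softmax(-1) for m in models]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)

# Placeholder "posterior samples": 10 randomly initialized linear classifiers.
models = [torch.nn.Linear(8, 3) for _ in range(10)]
x_in, x_ood = torch.randn(4, 8), 5.0 + torch.randn(4, 8)
print(predictive_entropy(models, x_in), predictive_entropy(models, x_ood))
```

Higher entropy on shifted inputs is the behavior usually hoped for; the paper's point is that proper Bayesian inference does not guarantee it.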

Repulsive Deep Ensembles are Bayesian

1 code implementation • NeurIPS 2021 • Francesco D'Angelo, Vincent Fortuin

Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency.

Bayesian Inference
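A minimal sketch of the kind of kernelized repulsive update the title refers to, in the spirit of SVGD: each ensemble member follows the target's score while a kernel-gradient term pushes members apart. Not the paper's code; the RBF kernel, bandwidth, toy Gaussian target, and step size are all illustrative assumptions.

```python
# Minimal sketch: repulsive update on an ensemble of parameter vectors.
import numpy as np

def grad_log_p(w):
    return -w  # score of a standard 2D Gaussian toy target

def rbf(a, b, h=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * h ** 2))

W = np.random.randn(20, 2)  # 20 ensemble members
for _ in range(100):
    new_W = np.empty_like(W)
    for i in range(len(W)):
        drive, repulse = np.zeros(2), np.zeros(2)
        for j in range(len(W)):
            k = rbf(W[j], W[i])
            drive += k * grad_log_p(W[j])     # kernel-weighted score
            repulse += k * (W[i] - W[j])      # grad of RBF kernel w.r.t. W[j]
        new_W[i] = W[i] + 0.1 * (drive + repulse) / len(W)
    W = new_W
```

Without the repulsion term the members would all collapse toward the mode, i.e. behave like independent MAP estimates rather than approximate posterior samples.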

On Stein Variational Neural Network Ensembles

no code implementations • 20 Jun 2021 • Francesco D'Angelo, Vincent Fortuin, Florian Wenzel

Ensembles of deep neural networks have achieved great success recently, but they do not offer a proper Bayesian justification.

Posterior Meta-Replay for Continual Learning

3 code implementations • NeurIPS 2021 • Christian Henning, Maria R. Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento

We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay.

Continual Learning
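The abstract mentions task-conditioned hypernetworks. A minimal deterministic sketch of that idea follows (the paper's version is probabilistic, maintaining a per-task posterior; all sizes and names here are illustrative assumptions): a hypernetwork maps a learned task embedding to the weights of a small target network.

```python
# Minimal sketch: a task-conditioned hypernetwork emitting target-net weights.
import torch

n_tasks, emb_dim, in_dim, out_dim = 3, 8, 4, 2
task_emb = torch.nn.Embedding(n_tasks, emb_dim)
hnet = torch.nn.Linear(emb_dim, in_dim * out_dim + out_dim)  # emits W and b

def target_forward(task_id, x):
    # Generate the target network's parameters from the task embedding.
    theta = hnet(task_emb(torch.tensor(task_id)))
    W = theta[: in_dim * out_dim].view(out_dim, in_dim)
    b = theta[in_dim * out_dim:]
    return x @ W.t() + b

print(target_forward(0, torch.randn(5, in_dim)).shape)  # torch.Size([5, 2])
```

Because each task only stores an embedding, conditioning on the task id lets one shared hypernetwork serve a whole sequence of tasks.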

Annealed Stein Variational Gradient Descent

no code implementations • AABI Symposium 2021 • Francesco D'Angelo, Vincent Fortuin

Particle-based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution.
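A minimal sketch of such a particle update with annealing (not the paper's code): particles follow an SVGD-style update toward a tempered target p(x)^beta, with beta ramped from near 0 to 1 so they first spread out, then concentrate. The toy target, schedule, and step size are illustrative assumptions.

```python
# Minimal sketch: SVGD-style particle updates on an annealed target.
import numpy as np

def grad_log_p(x):
    return -x  # score of a 1D standard-normal toy target

X = np.random.randn(50, 1) * 3.0  # 50 particles
for t in range(200):
    beta = min(1.0, (t + 1) / 100)            # annealing schedule
    d = X - X.T                                # pairwise differences x_i - x_j
    K = np.exp(-d ** 2 / 2.0)                  # RBF kernel matrix
    drive = K @ (beta * grad_log_p(X))         # annealed driving term
    repulse = (K * d).sum(1, keepdims=True)    # sum_j grad_{x_j} k(x_j, x_i)
    X = X + 0.05 * (drive + repulse) / len(X)
```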

Learning the Ising Model with Generative Neural Networks

1 code implementation • 15 Jan 2020 • Francesco D'Angelo, Lucas Böttcher

We also find that convolutional layers in VAEs are important for modelling spin correlations, whereas RBMs achieve similar or even better performance without convolutional filters.

Representation Learning
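For context, the 2D Ising configurations such generative models are trained on can be produced with a few lines of Metropolis sampling. A toy sketch (not the paper's code; lattice size, temperature, and step count are arbitrary assumptions):

```python
# Minimal sketch: Metropolis sampling of the 2D Ising model (J = 1,
# periodic boundaries). Flips a random spin and accepts with the
# Metropolis rule based on the local energy change.
import numpy as np

L, beta = 16, 0.4  # lattice size, inverse temperature
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(20000):
    i, j = rng.integers(L, size=2)
    # Energy change from flipping spin (i, j): dE = 2 * s_ij * sum(neighbors)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nb
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1
```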
