Search Results for author: Vincent Fortuin

Found 47 papers, 20 papers with code

Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood

no code implementations • 25 Feb 2024 • Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin

Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naïvely deploy on consumer hardware.

Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks

no code implementations • 30 Nov 2023 • Alexander Möllers, Alexander Immer, Elvin Isufi, Vincent Fortuin

Graph contrastive learning has shown great promise when labeled data is scarce but large unlabeled datasets are available.

Contrastive Learning, Node Classification

Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo

no code implementations • 30 Oct 2023 • Szilvia Ujváry, Gergely Flamich, Vincent Fortuin, José Miguel Hernández-Lobato

An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound.
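
For reference, a standard form of the bound in question is the PAC-Bayes-kl inequality (Langford–Seeger/Maurer): with probability at least 1 − δ over an i.i.d. sample of size n, simultaneously for all posteriors Q and any fixed prior P,

\[
\mathrm{kl}\!\left(\hat{R}_n(Q) \,\middle\|\, R(Q)\right) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta}}{n},
\]

where \(\hat{R}_n(Q)\) and \(R(Q)\) are the empirical and population risks of the Q-randomized predictor and kl is the binary KL divergence. This is the textbook form, not necessarily the exact bound the paper optimizes with HMC.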

A Primer on Bayesian Neural Networks: Review and Debates

1 code implementation • 28 Sep 2023 • Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin

Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks.

Bayesian Inference

Hodge-Aware Contrastive Learning

no code implementations • 14 Sep 2023 • Alexander Möllers, Alexander Immer, Vincent Fortuin, Elvin Isufi

We leverage this decomposition to develop a contrastive self-supervised learning approach for processing simplicial data and generating embeddings that encapsulate specific spectral information. Specifically, we encode the pertinent data invariances through simplicial neural networks and devise augmentations that yield positive contrastive examples with suitable spectral properties for downstream tasks.

Contrastive Learning, Self-Supervised Learning
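
The decomposition alluded to is, presumably, the Hodge decomposition of edge flows on a simplicial complex: with node-to-edge incidence matrix B_1 and edge-to-triangle incidence matrix B_2, the space of edge signals splits as

\[
\mathbb{R}^{|\mathcal{E}|} = \operatorname{im}\big(B_1^\top\big) \oplus \ker(L_1) \oplus \operatorname{im}(B_2),
\]

where \(L_1 = B_1^\top B_1 + B_2 B_2^\top\) is the Hodge 1-Laplacian and the three subspaces carry the gradient, harmonic, and curl components of a flow, respectively.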

Understanding Pathologies of Deep Heteroskedastic Regression

no code implementations • 29 Jun 2023 • Eliot Wong-Toi, Alex Boyd, Vincent Fortuin, Stephan Mandt

Deep, overparameterized regression models are notorious for their tendency to overfit.

regression
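
As background, deep heteroskedastic regression fits a mean network μ_θ and a variance network σ²_θ jointly by minimizing the Gaussian negative log-likelihood

\[
\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{\big(y_i - \mu_\theta(x_i)\big)^2}{2\,\sigma_\theta^2(x_i)} + \frac{1}{2} \log \sigma_\theta^2(x_i) \right];
\]

one well-known flavor of the overfitting mentioned above is that an overparameterized σ²_θ can collapse toward zero wherever μ_θ interpolates the training targets, driving the loss toward −∞.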

Improving Neural Additive Models with Bayesian Principles

no code implementations • 26 May 2023 • Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin

Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks.

Additive models, Bayesian Inference (+1 more)
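
To make the additive structure concrete, here is a minimal PyTorch sketch of a (non-Bayesian) neural additive model; the shapes and hyperparameters are illustrative and this is not the authors' implementation.

```python
# Minimal NAM sketch: each input feature gets its own small MLP ("shape
# function"), and the per-feature contributions are summed with a bias.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """One sub-network per input feature, mapping a scalar to a scalar."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class NAM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet(hidden) for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        parts = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(parts, dim=-1).sum(dim=-1) + self.bias

model = NAM(n_features=8)
y_hat = model(torch.randn(16, 8))  # shape: (16,)
```

Because each feature's contribution is a one-dimensional function, the learned shape functions can be plotted directly, which is the transparency the snippet refers to.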

Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization

1 code implementation • 17 Apr 2023 • Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Vincent Fortuin

The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.

Bayesian Optimization, Decision Making (+2 more)
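
For context, the LLA linearizes the network around a MAP estimate θ* and places a Gaussian over the parameters:

\[
f_{\mathrm{lin}}(x; \theta) = f(x; \theta_*) + J_{\theta_*}(x)\,(\theta - \theta_*), \qquad \theta \sim \mathcal{N}\big(\theta_*,\, H_{\theta_*}^{-1}\big),
\]

where \(J_{\theta_*}(x)\) is the Jacobian of the network outputs with respect to the parameters and \(H_{\theta_*}\) is a Hessian (or GGN) approximation of the log-posterior curvature, yielding closed-form Gaussian predictives over f.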

Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice

no code implementations • 14 Nov 2022 • Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, Andreas Krause

Meta-Learning aims to speed up the learning process on new tasks by acquiring useful inductive biases from datasets of related learning tasks.

Gaussian Processes, Meta-Learning (+1 more)

Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

1 code implementation • 22 Feb 2022 • Alexander Immer, Tycho F. A. van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, Mark van der Wilk

We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network.

Data Augmentation, Gaussian Processes (+1 more)
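
In outline, and consistent with the paper's framing, invariance and augmentation hyperparameters η are chosen by gradient ascent on an approximate marginal likelihood rather than by validation:

\[
\eta^{*} = \arg\max_{\eta} \, \log \hat{p}_{\mathrm{Laplace}}(\mathcal{D} \mid \eta),
\]

where \(\hat{p}_{\mathrm{Laplace}}\) denotes a Laplace approximation to the marginal likelihood (of the form shown further down this list), made differentiable in η so that it can be optimized jointly with the network weights during training.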

Meta-learning richer priors for VAEs

no code implementations • AABI Symposium 2022 • Marcello Massimo Negri, Vincent Fortuin, Jan Stuehmer

Variational auto-encoders have proven capable of capturing complicated data distributions and useful latent representations, while advances in meta-learning have made it possible to extract prior knowledge from data.

Meta-Learning

Probing as Quantifying Inductive Bias

1 code implementation • ACL 2022 • Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, Ryan Cotterell

Such performance improvements have motivated researchers to quantify and understand the linguistic information encoded in these representations.

Bayesian Inference, Inductive Bias

Pathologies in priors and inference for Bayesian transformers

no code implementations • NeurIPS Workshop ICBINB 2021 • Tristan Cinquin, Alexander Immer, Max Horn, Vincent Fortuin

In recent years, the transformer has established itself as a workhorse in many applications ranging from natural language processing to reinforcement learning.

Bayesian Inference, Variational Inference

Sparse MoEs meet Efficient Ensembles

1 code implementation • 7 Oct 2021 • James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton

Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models.

Few-Shot Learning

Deep Classifiers with Label Noise Modeling and Distance Awareness

no code implementations • 6 Oct 2021 • Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.

Out-of-Distribution Detection

Neural Variational Gradient Descent

2 code implementations • AABI Symposium 2022 • Lauro Langosco di Langosco, Vincent Fortuin, Heiko Strathmann

Particle-based approximate Bayesian inference approaches such as Stein Variational Gradient Descent (SVGD) combine the flexibility and convergence guarantees of sampling methods with the computational benefits of variational inference.

Bayesian Inference, regression (+1 more)
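
As a reference point for the SVGD baseline mentioned above, here is a minimal NumPy sketch of the standard SVGD update (Liu & Wang, 2016); the RBF kernel, fixed bandwidth, and toy Gaussian target are illustrative choices, and this is not the neural variant the paper proposes.

```python
# SVGD: move a set of particles so they approximate a target density,
# combining an attraction term (kernel-weighted scores) with a repulsion
# term (kernel gradients) that keeps the particles spread out.
import numpy as np

def rbf_kernel(x, h=1.0):
    # x: (n, d). Returns K[j, i] = k(x_j, x_i) and its gradient w.r.t. x_j.
    diff = x[:, None, :] - x[None, :, :]          # (n, n, d): x_j - x_i
    sq = (diff ** 2).sum(-1)                      # (n, n)
    K = np.exp(-sq / (2 * h ** 2))
    grad_K = -diff * K[..., None] / h ** 2        # (n, n, d)
    return K, grad_K

def svgd_step(x, score, step=0.1, h=1.0):
    # score: (n, d) array of grad log p evaluated at each particle
    K, grad_K = rbf_kernel(x, h)
    phi = (K @ score + grad_K.sum(axis=0)) / x.shape[0]
    return x + step * phi

# Toy example: particles converging to a standard 2-D Gaussian (score = -x).
x = np.random.randn(50, 2) * 3
for _ in range(200):
    x = svgd_step(x, score=-x)
```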

A Bayesian Approach to Invariant Deep Neural Networks

no code implementations • 20 Jul 2021 • Nikolaos Mourdoukoutas, Marco Federici, Georges Pantalos, Mark van der Wilk, Vincent Fortuin

We propose a novel Bayesian neural network architecture that can learn invariances from data alone by inferring a posterior distribution over different weight-sharing schemes.

Data Augmentation

Repulsive Deep Ensembles are Bayesian

1 code implementation • NeurIPS 2021 • Francesco D'Angelo, Vincent Fortuin

Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency.

Bayesian Inference

On Stein Variational Neural Network Ensembles

no code implementations • 20 Jun 2021 • Francesco D'Angelo, Vincent Fortuin, Florian Wenzel

Ensembles of deep neural networks have achieved great success recently, but they do not offer a proper Bayesian justification.

Priors in Bayesian Deep Learning: A Review

no code implementations • 14 May 2021 • Vincent Fortuin

While the choice of prior is one of the most critical parts of the Bayesian inference workflow, recent Bayesian deep learning models have often fallen back on vague priors, such as standard Gaussians.

Bayesian Inference, Gaussian Processes

BNNpriors: A library for Bayesian neural network inference with different prior distributions

1 code implementation • 14 May 2021 • Vincent Fortuin, Adrià Garriga-Alonso, Mark van der Wilk, Laurence Aitchison

Bayesian neural networks have shown great promise in many applications where calibrated uncertainty estimates are crucial and can often also lead to a higher predictive performance.

Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning

1 code implementation • 11 Apr 2021 • Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan

Marginal-likelihood-based model selection, even though promising, is rarely used in deep learning due to estimation difficulties.

Image Classification, Model Selection (+2 more)
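
The quantity being estimated is the marginal likelihood, which under a Laplace approximation around a MAP estimate θ* takes the standard form

\[
\log p(\mathcal{D} \mid \mathcal{M}) \approx \log p(\mathcal{D} \mid \theta_*, \mathcal{M}) + \log p(\theta_* \mid \mathcal{M}) + \frac{D}{2} \log 2\pi - \frac{1}{2} \log \det H_{\theta_*},
\]

with D the number of parameters and \(H_{\theta_*}\) the Hessian of the negative log-joint at θ*; the practical difficulty for deep networks lies in the log-determinant curvature term, which is what scalable approximations target.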

On Disentanglement in Gaussian Process Variational Autoencoders

no code implementations • AABI Symposium 2022 • Simon Bing, Vincent Fortuin, Gunnar Rätsch

While many models have been introduced to learn such disentangled representations, only a few attempt to explicitly exploit the structure of sequential data.

Disentanglement, Time Series (+1 more)

Exact Langevin Dynamics with Stochastic Gradients

no code implementations • AABI Symposium 2021 • Adrià Garriga-Alonso, Vincent Fortuin

Stochastic gradient Markov Chain Monte Carlo algorithms are popular samplers for approximate inference, but they are generally biased.
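
As a point of reference for the bias discussed above, here is a minimal NumPy sketch of stochastic gradient Langevin dynamics (SGLD) on an illustrative 1-D standard Gaussian target; the minibatch gradient noise is simulated with an additive Gaussian perturbation.

```python
# SGLD: gradient ascent on log p(theta) plus injected Gaussian noise.
# Without a Metropolis-Hastings correction, the discretization and the
# stochastic gradients both bias the resulting samples.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(theta):
    return -theta  # score of a standard Gaussian N(0, 1)

theta, step, samples = 0.0, 1e-2, []
for _ in range(10_000):
    noisy_grad = grad_log_p(theta) + rng.normal(scale=0.5)  # stand-in for minibatch noise
    theta += 0.5 * step * noisy_grad + rng.normal(scale=np.sqrt(step))
    samples.append(theta)

print(np.mean(samples), np.var(samples))  # approx. 0 and 1, up to the biases above
```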

Annealed Stein Variational Gradient Descent

no code implementations • AABI Symposium 2021 • Francesco D'Angelo, Vincent Fortuin

Particle-based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution.

Factorized Gaussian Process Variational Autoencoders

1 code implementation • AABI Symposium 2021 • Metod Jazbec, Michael Pearce, Vincent Fortuin

Variational autoencoders often assume isotropic Gaussian priors and mean-field posteriors, hence do not exploit structure in scenarios where we may expect similarity or consistency across latent variables.
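
In generic form, the structural change in GP-VAE-style models is to replace the i.i.d. isotropic prior over a sequence of latent codes with a Gaussian process prior that correlates latents across an auxiliary index t (e.g., time):

\[
p(Z) = \prod_{t=1}^{T} \mathcal{N}(z_t;\, 0, I) \quad\longrightarrow\quad z^{(d)} \sim \mathcal{GP}\big(0,\, k(t, t')\big) \ \text{for each latent dimension } d,
\]

which is what lets the model exploit similarity or consistency across latent variables.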

Scalable Gaussian Process Variational Autoencoders

1 code implementation • 26 Oct 2020 • Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch

Conventional variational autoencoders fail to model correlations between data points due to their use of factorized priors.

DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps

2 code implementations • 3 Oct 2019 • Laura Manduchi, Matthias Hüser, Julia Vogt, Gunnar Rätsch, Vincent Fortuin

We show that DPSOM achieves superior clustering performance compared to current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the favourable visualization properties of SOMs.

Clustering, Deep Clustering (+4 more)

MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis

1 code implementation • 27 Sep 2019 • Margherita Rosnati, Vincent Fortuin

With a death toll of 5.4 million lives worldwide every year and a healthcare cost of more than 16 billion dollars in the USA alone, sepsis is one of the leading causes of hospital mortality and an increasing concern in the ageing western world.

BIG-bench Machine Learning, Interpretable Machine Learning

Variational pSOM: Deep Probabilistic Clustering with Self-Organizing Maps

no code implementations • 25 Sep 2019 • Laura Manduchi, Matthias Hüser, Gunnar Rätsch, Vincent Fortuin

On the one hand, there are highly performant deep clustering models; on the other hand, there are interpretable representation learning techniques, often relying on latent topological structures such as self-organizing maps.

Clustering, Deep Clustering (+3 more)

GP-VAE: Deep Probabilistic Time Series Imputation

2 code implementations • 9 Jul 2019 • Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt

Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.

Dimensionality Reduction, Multivariate Time Series Imputation (+2 more)

Meta-Learning Mean Functions for Gaussian Processes

no code implementations • 23 Jan 2019 • Vincent Fortuin, Heiko Strathmann, Gunnar Rätsch

Approaches to meta-learning in Gaussian process models have mostly focused on learning the kernel function of the prior, but not on learning its mean function.

Gaussian Processes, Meta-Learning
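
In symbols, the setting is the GP prior

\[
f \sim \mathcal{GP}\big(m_\phi(x),\, k_\theta(x, x')\big),
\]

where meta-learning work has typically adapted the kernel hyperparameters θ across related tasks; here the mean function m_φ (parameterized, e.g., by a neural network, an illustrative choice) is meta-learned instead.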

Scalable Gaussian Processes on Discrete Domains

no code implementations • 24 Oct 2018 • Vincent Fortuin, Gideon Dresdner, Heiko Strathmann, Gunnar Rätsch

We explore different techniques for selecting inducing points on discrete domains, including greedy selection, determinantal point processes, and simulated annealing.

Gaussian Processes, Point Processes

SOM-VAE: Interpretable Discrete Representation Learning on Time Series

6 code implementations • ICLR 2019 • Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch

We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, as well as on a challenging real-world medical time series application on the eICU data set.

Clustering, Dimensionality Reduction (+3 more)
