no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
The field of deep generative modeling has grown rapidly and consistently over the years.
no code implementations • 18 Oct 2023 • Junaid Ali, Matthaeus Kleindessner, Florian Wenzel, Kailash Budhathoki, Volkan Cevher, Chris Russell
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP), that are used for labeling tasks.
no code implementations • 20 Apr 2023 • Max F. Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, Chris Russell
Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification.
no code implementations • NeurIPS 2023 • Marco Fumero, Florian Wenzel, Luca Zancato, Alessandro Achille, Emanuele Rodolà, Stefano Soatto, Bernhard Schölkopf, Francesco Locatello
Recovering the latent factors of variation of high dimensional data has so far focused on simple synthetic settings.
1 code implementation • 4 Mar 2023 • Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava
In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, thereby exploring the hypothesis space beyond stochastic perturbations of model weights and hyperparameters.
no code implementations • 15 Dec 2022 • JieLin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, Mu Li
Multimodal image-text models have shown remarkable performance in the past few years.
1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations.
1 code implementation • 7 Oct 2021 • James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton
Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models.
no code implementations • 6 Oct 2021 • Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou
Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.
no code implementations • 20 Jun 2021 • Francesco D'Angelo, Vincent Fortuin, Florian Wenzel
Ensembles of deep neural networks have achieved great success recently, but they do not offer a proper Bayesian justification.
3 code implementations • 7 Jun 2021 • Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Faris Sbahi, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.
1 code implementation • NeurIPS Workshop ICBINB 2020 • Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference.
no code implementations • Approximate Inference (AABI) Symposium 2021 • Zelda E Mariet, Rodolphe Jenatton, Florian Wenzel, Dustin Tran
We seek to bridge the performance gap between batch ensembles (ensembles of deep networks with shared parameters) and deep ensembles on tasks which require not only predictions, but also uncertainty estimates for these predictions.
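The shared-parameter construction behind batch ensembles can be sketched with the rank-1 BatchEnsemble parameterization of Wen et al. (each member's weight is W_i = W ∘ r_i s_iᵀ); all names, shapes, and values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, n_members = 4, 3, 2
W = rng.standard_normal((d_in, d_out))      # shared "slow" weight matrix
R = rng.standard_normal((n_members, d_in))  # per-member input factors r_i
S = rng.standard_normal((n_members, d_out)) # per-member output factors s_i

def member_forward(x, i):
    # Member i uses W_i = W * outer(r_i, s_i), applied cheaply as
    # ((x * r_i) @ W) * s_i without ever materializing W_i.
    return ((x * R[i]) @ W) * S[i]

x = rng.standard_normal(d_in)
preds = np.stack([member_forward(x, i) for i in range(n_members)])
# Sanity check against the explicit rank-1-modulated weight.
for i in range(n_members):
    assert np.allclose(preds[i], x @ (W * np.outer(R[i], S[i])))
ensemble_out = preds.mean(axis=0)           # prediction-level aggregation
```

Only the rank-1 factors differ per member, which is what makes batch ensembles far cheaper than a full deep ensemble, at some cost in diversity.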
3 code implementations • NeurIPS 2020 • Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton
Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration.
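As a minimal illustration of the deep-ensemble recipe described here (train the same model from several random initializations, then average predicted probabilities), a toy logistic-regression version; the data and hyperparameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy binary data: the label is 1 when the feature sum is positive.
X = rng.standard_normal((200, 2))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_member(seed, steps=500, lr=0.5):
    # Each ensemble member starts from its own random initialization.
    w = np.random.default_rng(seed).standard_normal(2)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient step on logistic loss
    return w

members = [train_member(seed) for seed in range(5)]
# Deep-ensemble prediction: average the member probabilities.
x_test = np.array([1.0, 1.0])
ensemble_prob = float(np.mean([sigmoid(x_test @ w) for w in members]))
```

For a convex toy model the members agree closely; the benefit the paper targets comes from the diversity of members in non-convex deep networks.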
1 code implementation • 26 Feb 2020 • Théo Galy-Fajou, Florian Wenzel, Manfred Opper
Building on the conjugate structure of the augmented model, we develop two inference methods.
1 code implementation • ICML 2020 • Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.
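The tempered ("cold", T < 1) posteriors studied in this line of work raise the posterior to a power 1/T. In a conjugate Gaussian toy model (not the paper's neural-network setting, where MCMC is required) tempering keeps the posterior mean and scales the variance by T:

```python
import numpy as np

# Conjugate toy model: x_i ~ N(theta, sigma2), prior theta ~ N(0, s2).
x = np.array([0.8, 1.1, 0.9, 1.3])
sigma2, s2 = 0.5, 1.0

post_prec = 1.0 / s2 + len(x) / sigma2
post_mean = (x.sum() / sigma2) / post_prec
post_var = 1.0 / post_prec

# "Cold" posterior p(theta | x)^(1/T) with T < 1: for a Gaussian,
# exp(-(t - m)^2 / (2 v))^(1/T) = exp(-(t - m)^2 / (2 v T)),
# so the mean is unchanged and the variance shrinks to v * T.
T = 0.2
cold_var = T * post_var
```

The paper's observation is that such artificially concentrated posteriors often predict better than the untempered Bayes posterior, which is the puzzle.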
3 code implementations • 23 May 2019 • Théo Galy-Fajou, Florian Wenzel, Christian Donner, Manfred Opper
We propose a new scalable multi-class Gaussian process classification approach building on a novel modified softmax likelihood function.
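The modified likelihood referred to here replaces exp(f_k) in the ordinary softmax with the logistic function σ(f_k) (a "logistic-softmax"), which is what makes the augmented model amenable to conjugate inference; a minimal sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_softmax(f):
    # Ordinary softmax uses exp(f_k) / sum_j exp(f_j); the modified
    # likelihood normalizes sigmoid(f_k) instead. Both yield a valid
    # categorical distribution over the classes.
    s = sigmoid(f)
    return s / s.sum()

f = np.array([2.0, -1.0, 0.5])  # latent GP values for three classes
p = logistic_softmax(f)
```

Per-class sigmoids are exactly the form that Polya-Gamma-style augmentations can handle, which ordinary exponentials in the softmax cannot.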
no code implementations • ICML 2018 • Alexander Buchholz, Florian Wenzel, Stephan Mandt
We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration.
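The quasi-Monte Carlo idea underlying this work replaces i.i.d. uniforms with a low-discrepancy sequence before mapping through the inverse CDF. A stdlib-only sketch using a 1-D Kronecker (golden-ratio) sequence; real QMC work would use Sobol or Halton points, and the paper's randomized-QMC variational estimators are more involved:

```python
import math
from statistics import NormalDist

def kronecker_sequence(n, alpha=(math.sqrt(5) - 1) / 2):
    # Additive-recurrence low-discrepancy points in (0, 1): fractional
    # parts of multiples of the golden-ratio conjugate.
    return [((i + 1) * alpha) % 1.0 for i in range(n)]

def qmc_expectation(g, n):
    # Estimate E[g(x)] for x ~ N(0, 1) by pushing the low-discrepancy
    # uniforms through the inverse Gaussian CDF.
    inv_cdf = NormalDist().inv_cdf
    return sum(g(inv_cdf(u)) for u in kronecker_sequence(n)) / n

# E[x^2] = 1 for a standard normal; the QMC estimate converges
# faster than plain Monte Carlo's O(1/sqrt(n)).
estimate = qmc_expectation(lambda x: x * x, 4096)
```

The constant-learning-rate scheme in the paper instead grows the number of QMC samples per iteration to drive down the gradient error.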
1 code implementation • 21 Mar 2018 • Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt
First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs).
3 code implementations • 18 Feb 2018 • Florian Wenzel, Théo Galy-Fajou, Christian Donner, Marius Kloft, Manfred Opper
We propose a scalable stochastic variational approach to GP classification building on Polya-Gamma data augmentation and inducing points.
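Polya-Gamma augmentation rests on the identity σ(z) = e^{z/2} / (2 cosh(z/2)): since E[e^{-ω z²/2}] = 1/cosh(z/2) for ω ~ PG(1, 0), the logistic likelihood becomes Gaussian in the latent GP values once ω is conditioned on. A quick numerical check of the deterministic identity:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_via_cosh(z):
    # sigma(z) = exp(z/2) / (2 * cosh(z/2))
    #          = exp(z/2) / (exp(z/2) + exp(-z/2)) = 1 / (1 + exp(-z)).
    # Replacing 1/cosh(z/2) by the PG expectation E[exp(-omega z^2 / 2)]
    # makes the likelihood Gaussian in z given omega.
    return math.exp(z / 2) / (2 * math.cosh(z / 2))

for z in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    assert math.isclose(sigmoid(z), sigmoid_via_cosh(z), rel_tol=1e-12)
```

Conditional conjugacy is what lets the paper run closed-form block coordinate ascent updates with inducing points.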
3 code implementations • 18 Jul 2017 • Florian Wenzel, Théo Galy-Fajou, Matthaeus Deutsch, Marius Kloft
We propose a fast inference method for Bayesian nonlinear support vector machines that leverages stochastic variational inference and inducing points.
no code implementations • 16 Jul 2015 • Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft
Formulated as models for linear regression, linear mixed models (LMMs) have been restricted to continuous phenotypes.