Search Results for author: Florian Wenzel

Found 22 papers, 12 papers with code

Evaluating the Fairness of Discriminative Foundation Models in Computer Vision

no code implementations 18 Oct 2023 Junaid Ali, Matthaeus Kleindessner, Florian Wenzel, Kailash Budhathoki, Volkan Cevher, Chris Russell

We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP), that are used for labeling tasks.

Fairness Image Captioning +2

Image retrieval outperforms diffusion models on data augmentation

no code implementations 20 Apr 2023 Max F. Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, Chris Russell

Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification.

Data Augmentation Image Retrieval +2

Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries

1 code implementation 4 Mar 2023 Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava

In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters.

Representation Learning Uncertainty Quantification

Assaying Out-Of-Distribution Generalization in Transfer Learning

1 code implementation 19 Jul 2022 Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations.

Adversarial Robustness Out-of-Distribution Generalization +1

Sparse MoEs meet Efficient Ensembles

1 code implementation 7 Oct 2021 James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton

Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models.

Few-Shot Learning

Deep Classifiers with Label Noise Modeling and Distance Awareness

no code implementations 6 Oct 2021 Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.

Out-of-Distribution Detection

On Stein Variational Neural Network Ensembles

no code implementations 20 Jun 2021 Francesco D'Angelo, Vincent Fortuin, Florian Wenzel

Ensembles of deep neural networks have achieved great success recently, but they do not offer a proper Bayesian justification.

Distilling Ensembles Improves Uncertainty Estimates

no code implementations AABI Symposium 2021 Zelda E Mariet, Rodolphe Jenatton, Florian Wenzel, Dustin Tran

We seek to bridge the performance gap between batch ensembles (ensembles of deep networks with shared parameters) and deep ensembles on tasks which require not only predictions, but also uncertainty estimates for these predictions.

Hyperparameter Ensembles for Robustness and Uncertainty Quantification

3 code implementations NeurIPS 2020 Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton

Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration.

Image Classification Uncertainty Quantification
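The deep-ensemble recipe mentioned in the abstract, averaging the predictive distributions of networks trained from different random initializations, reduces to a uniform mixture over members. A minimal pure-Python sketch (the toy probabilities are illustrative, not taken from the paper):

```python
def ensemble_predict(member_probs):
    """Average the class-probability vectors of ensemble members.

    member_probs: list of per-member softmax outputs for one example,
    each a list of class probabilities summing to 1.
    """
    n_members = len(member_probs)
    n_classes = len(member_probs[0])
    return [sum(m[c] for m in member_probs) / n_members
            for c in range(n_classes)]

# Three toy "members" that disagree on a binary classification example.
member_probs = [[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]]
avg = ensemble_predict(member_probs)
print(avg)  # ≈ [0.733, 0.267]; the mixture is less confident than the sharpest member
```

Hyperparameter ensembles, the subject of this paper, apply the same aggregation but vary hyperparameters as well as initializations across members.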

How Good is the Bayes Posterior in Deep Neural Networks Really?

1 code implementation ICML 2020 Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin

In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.

Bayesian Inference Uncertainty Quantification
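The tempered posterior at the center of the paper's "cold posterior" analysis can be restated as:

```latex
% Tempered posterior with temperature T; T = 1 recovers the Bayes posterior.
p_T(\theta \mid \mathcal{D}) \propto \exp\!\big(-U(\theta)/T\big),
\qquad
U(\theta) = -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) - \log p(\theta).
```

The surprising empirical finding is that temperatures T < 1 (a "cold" posterior, sharper than the Bayes posterior) often yield the best predictions.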

Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation

3 code implementations 23 May 2019 Théo Galy-Fajou, Florian Wenzel, Christian Donner, Manfred Opper

We propose a new scalable multi-class Gaussian process classification approach building on a novel modified softmax likelihood function.

Bayesian Inference Data Augmentation +2

Quasi-Monte Carlo Variational Inference

no code implementations ICML 2018 Alexander Buchholz, Florian Wenzel, Stephan Mandt

We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration.

Variational Inference
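The core quasi-Monte Carlo idea, swapping i.i.d. uniform samples for a low-discrepancy sequence to reduce the error of Monte Carlo estimates, can be illustrated in one dimension with a base-2 van der Corput sequence. This is a toy sketch; the paper applies randomized QMC inside variational objectives.

```python
import random

def van_der_corput(i):
    """i-th point of the base-2 van der Corput low-discrepancy sequence."""
    x, f = 0.0, 0.5
    while i:
        x += f * (i & 1)  # reflect the binary digits of i about the point
        i >>= 1
        f *= 0.5
    return x

def estimate(points, f):
    """Monte Carlo estimate of E[f(U)] from a list of points in [0, 1)."""
    return sum(f(x) for x in points) / len(points)

f = lambda u: u * u  # E[f(U)] = 1/3 for U ~ Uniform(0, 1)
n = 1024
qmc = estimate([van_der_corput(i) for i in range(n)], f)
mc = estimate([random.random() for _ in range(n)], f)
print(abs(qmc - 1 / 3), abs(mc - 1 / 3))  # QMC error is ~5e-4 here; MC error varies run to run
```

With the first 2^k van der Corput points the estimate is deterministic and its error shrinks roughly like 1/n, versus the 1/sqrt(n) rate of plain Monte Carlo sampling.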

Scalable Generalized Dynamic Topic Models

1 code implementation 21 Mar 2018 Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt

First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs).

Event Detection Gaussian Processes +2

Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation

3 code implementations 18 Feb 2018 Florian Wenzel, Theo Galy-Fajou, Christian Donner, Marius Kloft, Manfred Opper

We propose a scalable stochastic variational approach to GP classification building on Polya-Gamma data augmentation and inducing points.

Classification Data Augmentation +1
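The Polya-Gamma identity of Polson, Scott & Windle (2013) that underlies this augmentation, restated for context: for b > 0 and kappa = a - b/2,

```latex
% Logistic terms become Gaussian in \psi once \omega \sim \mathrm{PG}(b, 0) is conditioned on.
\frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}}
  = 2^{-b} e^{\kappa \psi}
    \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\,
    p_{\mathrm{PG}}(\omega \mid b, 0)\, d\omega .
```

Conditioned on the auxiliary variable omega, the log-likelihood is quadratic in psi, which is what makes the variational updates for the GP conjugate and cheap.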

Bayesian Nonlinear Support Vector Machines for Big Data

3 code implementations 18 Jul 2017 Florian Wenzel, Theo Galy-Fajou, Matthaeus Deutsch, Marius Kloft

We propose a fast inference method for Bayesian nonlinear support vector machines that leverages stochastic variational inference and inducing points.

Variational Inference

Sparse Probit Linear Mixed Model

no code implementations 16 Jul 2015 Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft

Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes.

Feature Selection
