Search Results for author: Matthias Bauer

Found 17 papers, 6 papers with code

Improving fine-grained understanding in image-text pre-training

no code implementations 18 Jan 2024 Ioana Bica, Anastasija Ilić, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrović

We introduce SPARse Fine-grained Contrastive Alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs.

Object Detection
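
The snippet above only names the method, so here is a minimal, hypothetical sketch of sparse token-to-patch alignment; the tensor shapes, the 1/P sparsity threshold, and the cosine alignment loss are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def sparse_alignment_loss(patch_emb, token_emb, eps=1e-8):
    """Simplified sketch: align each caption token with a sparse set of
    image patches. patch_emb: (B, P, D); token_emb: (B, T, D)."""
    P = patch_emb.shape[1]
    sim = torch.einsum('btd,bpd->btp', token_emb, patch_emb)  # (B, T, P)

    # Min-max normalise per token, zero out weak matches so each token
    # attends to only a few patches, then renormalise the weights.
    lo, hi = sim.amin(-1, keepdim=True), sim.amax(-1, keepdim=True)
    w = (sim - lo) / (hi - lo + eps)
    w = torch.where(w > 1.0 / P, w, torch.zeros_like(w))
    w = w / (w.sum(-1, keepdim=True) + eps)

    # Language-grouped vision embedding: one patch summary per token.
    grouped = torch.einsum('btp,bpd->btd', w, patch_emb)      # (B, T, D)

    # Pull each token embedding towards its grouped patch embedding.
    return 1.0 - F.cosine_similarity(grouped, token_emb, dim=-1).mean()
```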

C3: High-performance and low-complexity neural compression from a single image or video

no code implementations 5 Dec 2023 Hyunjik Kim, Matthias Bauer, Lucas Theis, Jonathan Richard Schwarz, Emilien Dupont

On the UVG video benchmark, we match the rate-distortion (RD) performance of the Video Compression Transformer (Mentzer et al.), a well-established neural video codec, with less than 5k MACs/pixel for decoding.

Video Compression
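
To put the quoted decoding budget in perspective, a back-of-the-envelope calculation (the 1080p frame size is an assumption for illustration):

```python
# "Less than 5k MACs/pixel": multiply-accumulate operations per pixel.
macs_per_pixel = 5_000               # upper bound quoted in the abstract
pixels = 1920 * 1080                 # one 1080p frame (illustrative)
print(f"{macs_per_pixel * pixels / 1e9:.1f} GMACs per decoded frame")  # ~10.4
```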

Spatial Functa: Scaling Functa to ImageNet Classification and Generation

no code implementations 6 Feb 2023 Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim

Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities.

Classification Image Generation
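
For readers new to the idea, a minimal coordinate-MLP neural field is sketched below; the random Fourier features and layer sizes are generic choices for illustration, not the spatial functa parameterisation itself.

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Minimal implicit neural representation: map (x, y) -> RGB."""
    def __init__(self, fourier_dim=64, hidden=128, scale=10.0):
        super().__init__()
        # Random Fourier features help MLPs fit high-frequency signals.
        self.register_buffer('B', torch.randn(2, fourier_dim) * scale)
        self.mlp = nn.Sequential(
            nn.Linear(2 * fourier_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, coords):          # coords: (N, 2) in [0, 1]^2
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return self.mlp(feats)          # (N, 3) RGB values

# To represent one image, regress pixel colours at pixel coordinates.
```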

Regularising for invariance to data augmentation improves supervised learning

no code implementations 7 Mar 2022 Aleksander Botev, Matthias Bauer, Soham De

Data augmentation is used in machine learning to make the classifier invariant to label-preserving transformations.

Data Augmentation
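
One common way to instantiate such a regulariser (an illustration, not necessarily the paper's exact objective) is to penalise disagreement between the model's predictions on two augmented views of the same input:

```python
import torch.nn.functional as F

def consistency_regularized_loss(model, x_aug1, x_aug2, y):
    """Cross-entropy on one view plus a KL penalty that encourages the
    model to predict the same distribution for both augmented views."""
    logits1, logits2 = model(x_aug1), model(x_aug2)
    ce = F.cross_entropy(logits1, y)
    kl = F.kl_div(F.log_softmax(logits1, dim=-1),
                  F.log_softmax(logits2, dim=-1),
                  log_target=True, reduction='batchmean')
    return ce + kl
```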

Laplace Redux -- Effortless Bayesian Deep Learning

3 code implementations NeurIPS 2021 Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

Misconceptions Model Selection +1
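
The paper is accompanied by the laplace PyTorch library; a usage sketch along the lines of the paper's own example follows, with a toy model and synthetic data standing in for your own (check the library docs for the current API).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace  # pip install laplace-torch

# Toy stand-ins: a small "pre-trained" classifier and synthetic data.
model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 2))
X, y = torch.randn(100, 20), torch.randint(0, 2, (100,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)

# Post-hoc Laplace approximation, as in the paper's usage example.
la = Laplace(model, 'classification',
             subset_of_weights='last_layer',
             hessian_structure='kron')
la.fit(train_loader)
la.optimize_prior_precision(method='marglik')  # tune prior precision
pred = la(X[:5], link_approx='probit')         # uncertainty-aware predictions
```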

Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning

1 code implementation 11 Apr 2021 Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan

Model selection based on the marginal likelihood, though promising, is rarely used in deep learning because the marginal likelihood is difficult to estimate.

Image Classification Model Selection +2
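
The estimand here is typically the Laplace approximation to the log marginal likelihood around a MAP estimate theta* with d parameters, where H is the Hessian of the negative log joint at theta*:

```latex
\log p(\mathcal{D}) \approx \log p(\mathcal{D} \mid \theta_*) + \log p(\theta_*)
  + \frac{d}{2}\log 2\pi - \frac{1}{2}\log\det \mathbf{H}_{\theta_*}
```

Scalability then hinges on cheap structured approximations to the log-determinant term.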

Generalized Doubly Reparameterized Gradient Estimators

no code implementations 26 Jan 2021 Matthias Bauer, Andriy Mnih

Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders.
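
The reparameterization trick mentioned here, in a two-line sketch: express a Gaussian sample as a deterministic function of parameter-free noise so that gradients can flow to the distribution parameters.

```python
import torch

def reparameterized_sample(mu, log_var):
    """z ~ N(mu, sigma^2) written as z = mu + sigma * eps, eps ~ N(0, I),
    so backprop reaches mu and log_var through the sample itself."""
    eps = torch.randn_like(mu)           # parameter-free noise
    return mu + torch.exp(0.5 * log_var) * eps
```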

Improving predictions of Bayesian neural nets via local linearization

1 code implementation 19 Aug 2020 Alexander Immer, Maciej Korzepa, Matthias Bauer

The generalized Gauss-Newton (GGN) approximation is often used to make practical Bayesian deep learning approaches scalable by replacing a second order derivative with a product of first order derivatives.

Out-of-Distribution Detection
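
Concretely, writing J(x_n) for the per-example Jacobian of the network outputs with respect to the weights and Lambda(x_n) for the Hessian of the loss with respect to the outputs, the GGN approximation is:

```latex
\mathbf{H} \approx \mathbf{G} = \sum_{n} \mathbf{J}(x_n)^\top \Lambda(x_n)\, \mathbf{J}(x_n),
\qquad
\mathbf{J}(x_n) = \frac{\partial f(x_n; \theta)}{\partial \theta}, \quad
\Lambda(x_n) = \nabla^2_{f}\, \ell\big(f(x_n; \theta),\, y_n\big)
```

For losses convex in the network outputs, G is positive semi-definite by construction, unlike the full Hessian.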

Interpretable and Differentially Private Predictions

1 code implementation 5 Jun 2019 Frederik Harder, Matthias Bauer, Mijung Park

Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.

General Classification

Resampled Priors for Variational Autoencoders

no code implementations 26 Oct 2018 Matthias Bauer, Andriy Mnih

We propose Learned Accept/Reject Sampling (LARS), a method for constructing richer priors using rejection sampling with a learned acceptance function.
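
A minimal sketch of the accept/reject loop (the acceptance network, proposal, and truncation length are illustrative assumptions): samples from a simple proposal pi(z) are kept with learned probability a(z), giving a density proportional to pi(z) * a(z).

```python
import torch
from torch import nn

# Hypothetical learned acceptance function: maps z to a single logit.
acceptance_net = nn.Linear(4, 1)

def lars_sample(proposal_dim=4, max_tries=100):
    """Draw z from the proposal pi(z) = N(0, I) and keep it with learned
    probability a(z), so accepted samples follow p(z) ~ pi(z) * a(z)."""
    for _ in range(max_tries):
        z = torch.randn(proposal_dim)            # z ~ pi(z)
        a = torch.sigmoid(acceptance_net(z))     # a(z) in (0, 1)
        if torch.rand(1) < a:
            return z
    return z  # truncation: give up and return the last draw

sample = lars_sample()
```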

Learning Invariances using the Marginal Likelihood

no code implementations NeurIPS 2018 Mark van der Wilk, Matthias Bauer, ST John, James Hensman

Generalising well in supervised learning tasks relies on correctly extrapolating the training data to a large region of the input space.

Data Augmentation Gaussian Processes +2

Meta-Learning Probabilistic Inference For Prediction

1 code implementation ICLR 2019 Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E. Turner

We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass.

Few-Shot Image Classification Few-Shot Learning
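
An illustrative sketch of such an amortization network (layer sizes, mean pooling, and the Gaussian output are assumptions, not the exact VERSA architecture): it must accept any number of shots and emit a distribution over task-specific parameters in one forward pass.

```python
import torch
import torch.nn as nn

class AmortizationNet(nn.Module):
    """Embed a support set of arbitrary size, pool over shots, and emit
    the mean and log-variance of a Gaussian over task parameters."""
    def __init__(self, feat_dim=64, param_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.to_mean = nn.Linear(128, param_dim)
        self.to_logvar = nn.Linear(128, param_dim)

    def forward(self, support_feats):   # (num_shots, feat_dim), any num_shots
        pooled = self.encoder(support_feats).mean(dim=0)  # permutation-invariant
        return self.to_mean(pooled), self.to_logvar(pooled)
```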

Automatic Estimation of Modulation Transfer Functions

1 code implementation 4 May 2018 Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf

The modulation transfer function (MTF) is widely used to characterise the performance of optical systems.
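
For context, the MTF is the magnitude of the Fourier transform of the point spread function (PSF), normalised to one at zero frequency; the sketch below computes it for a synthetic 1-D PSF, whereas estimating the PSF from real images is the problem the paper automates.

```python
import numpy as np

def mtf_from_psf(psf_1d):
    """MTF = |FFT(PSF)| with the PSF normalised to unit sum,
    so the MTF equals 1 at zero spatial frequency."""
    otf = np.fft.rfft(psf_1d / psf_1d.sum())   # optical transfer function
    return np.abs(otf)                          # MTF; MTF[0] == 1

# Example: a Gaussian PSF yields a Gaussian-shaped MTF.
x = np.linspace(-5, 5, 257)
print(mtf_from_psf(np.exp(-x**2 / 2))[:5])
```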

Understanding Probabilistic Sparse Gaussian Process Approximations

no code implementations NeurIPS 2016 Matthias Bauer, Mark van der Wilk, Carl Edward Rasmussen

Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets.

Gaussian Processes Regression
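
For scale: exact GP regression costs O(n^3) time in the number of data points n, while inducing-point methods such as the variational free-energy (VFE) approximation analysed here reduce this to O(n m^2) for m << n inducing points by lower-bounding the log marginal likelihood (Titsias, 2009):

```latex
\log p(\mathbf{y}) \ge \log \mathcal{N}\big(\mathbf{y} \mid \mathbf{0},\;
  \mathbf{Q}_{nn} + \sigma^2 \mathbf{I}\big)
  - \frac{1}{2\sigma^2}\,\mathrm{tr}\big(\mathbf{K}_{nn} - \mathbf{Q}_{nn}\big),
\qquad
\mathbf{Q}_{nn} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{-1}\mathbf{K}_{mn}
```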
