Search Results for author: Ullrich Köthe

Found 43 papers, 22 papers with code

DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated MR Images

no code implementations • 12 Mar 2024 • Michael Götz, Christian Weber, Franciszek Binczyk, Joanna Polanska, Rafal Tarnawski, Barbara Bobek-Billewicz, Ullrich Köthe, Jens Kleesiek, Bram Stieltjes, Klaus H. Maier-Hein

We propose a new method that employs transfer learning techniques to effectively correct sampling selection errors introduced by sparse annotations during supervised learning for automated tumor segmentation.

Domain Adaptation Transfer Learning +1

On the Universality of Coupling-based Normalizing Flows

no code implementations • 9 Feb 2024 • Felix Draxler, Stefan Wahl, Christoph Schnörr, Ullrich Köthe

We present a novel theoretical framework for understanding the expressive power of coupling-based normalizing flows such as RealNVP.
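
To make "coupling-based" concrete, below is a minimal NumPy sketch of the affine coupling block used in RealNVP, the design whose expressive power the paper analyzes. The tiny linear s_net/t_net are placeholders for the neural networks a real flow would learn.

```python
import numpy as np

def affine_coupling_forward(x, s_net, t_net):
    """One RealNVP-style affine coupling block.

    The first half of the input passes through unchanged and
    parameterizes an affine transform of the second half; the
    Jacobian is triangular, so log|det J| is simply sum(s).
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = s_net(x1), t_net(x1)      # scale and shift from the untouched half
    y2 = x2 * np.exp(s) + t          # invertible for any s, t
    return np.concatenate([x1, y2], axis=-1), s.sum(axis=-1)

def affine_coupling_inverse(y, s_net, t_net):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = s_net(y1), t_net(y1)
    x2 = (y2 - t) * np.exp(-s)       # exact inverse, no iterative solve needed
    return np.concatenate([y1, x2], axis=-1)

rng = np.random.default_rng(0)
W_s, W_t = 0.1 * rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
s_net, t_net = (lambda h: h @ W_s), (lambda h: h @ W_t)
x = rng.normal(size=(4, 4))
y, log_det = affine_coupling_forward(x, s_net, t_net)
print(np.allclose(affine_coupling_inverse(y, s_net, t_net), x))  # True
```

Stacking many such blocks, with permutations of the variables in between, yields the full flows whose universality the paper studies.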

Learning Distributions on Manifolds with Free-form Flows

1 code implementation • 15 Dec 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Ullrich Köthe

Many real-world data, particularly in the natural sciences and computer vision, lie on known Riemannian manifolds such as spheres, tori or the group of rotation matrices.

Towards Context-Aware Domain Generalization: Understanding the Benefits and Limits of Marginal Transfer Learning

no code implementations • 15 Dec 2023 • Jens Müller, Lars Kühmichel, Martin Rohbeck, Stefan T. Radev, Ullrich Köthe

In this work, we analyze the conditions under which information about the context of an input $X$ can improve the predictions of deep learning models in new domains.

Domain Generalization Transfer Learning

Consistency Models for Scalable and Fast Simulation-Based Inference

no code implementations • 9 Dec 2023 • Marvin Schmitt, Valentin Pratz, Ullrich Köthe, Paul-Christian Bürkner, Stefan T. Radev

Simulation-based inference (SBI) is constantly in search of more expressive algorithms for accurately inferring the parameters of complex models from noisy data.

Denoising

Sensitivity-Aware Amortized Bayesian Inference

no code implementations • 17 Oct 2023 • Lasse Elsemüller, Hans Olischläger, Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev

In this work, we propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to efficiently integrate sensitivity analyses into simulation-based inference with neural networks.

Bayesian Inference Decision Making

Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference

no code implementations • 6 Oct 2023 • Marvin Schmitt, Desi R. Ivanova, Daniel Habermann, Ullrich Köthe, Paul-Christian Bürkner, Stefan T. Radev

We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data.

Bayesian Inference
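
One natural reading of the "universal symmetries" mentioned above is the self-consistency of Bayes' theorem (a hedged summary, not necessarily the paper's exact formulation): the marginal likelihood

$$ p(x) = \frac{p(\theta)\, p(x \mid \theta)}{p(\theta \mid x)} $$

must take the same value for every $\theta$. If an approximate posterior $q(\theta \mid x)$ is substituted into the right-hand side, any variation across sampled $\theta$ signals approximation error and can be penalized during training without running additional simulations.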

A Review of Change of Variable Formulas for Generative Modeling

no code implementations • 4 Aug 2023 • Ullrich Köthe

Change-of-variables (CoV) formulas make it possible to reduce complicated probability densities to simpler ones via a learned transformation with a tractable Jacobian determinant.

Bayesian Inference Model Selection +1
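
For reference, the change-of-variables formula at the heart of this review reads

$$ p_X(x) = p_Z\big(f(x)\big)\, \left| \det \frac{\partial f(x)}{\partial x} \right|, $$

where $f$ is the learned transformation mapping data $x$ to a variable $z = f(x)$ with a simple density $p_Z$; tractability of the Jacobian determinant is what makes the data density computable.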

BayesFlow: Amortized Bayesian Workflows With Neural Networks

1 code implementation • 28 Jun 2023 • Stefan T. Radev, Marvin Schmitt, Lukas Schumacher, Lasse Elsemüller, Valentin Pratz, Yannik Schälte, Ullrich Köthe, Paul-Christian Bürkner

Modern Bayesian inference involves a mixture of computational techniques for estimating, validating, and drawing conclusions from probabilistic models as part of principled workflows for data analysis.

Bayesian Inference Data Compression

Lifting Architectural Constraints of Injective Flows

2 code implementations • 2 Jun 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Lea Zimmermann, Ullrich Köthe

Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data.

Training Invertible Neural Networks as Autoencoders

1 code implementation • 20 Mar 2023 • The-Gia Leo Nguyen, Lynton Ardizzone, Ullrich Köthe

Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks.

Finding Competence Regions in Domain Generalization

1 code implementation • 17 Mar 2023 • Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe

We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution.

Domain Generalization
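
The generic "learning to reject" recipe is to abstain whenever a per-input competence score falls below a threshold. The sketch below uses the maximum softmax probability as that score purely for illustration; the paper investigates more refined competence estimates.

```python
import numpy as np

def predict_with_reject(probs, tau=0.8):
    """Return class predictions, or -1 ("reject") when competence is low."""
    scores = probs.max(axis=-1)            # competence score per input
    labels = probs.argmax(axis=-1)
    return np.where(scores >= tau, labels, -1)

probs = np.array([[0.95, 0.05],            # confident -> predict class 0
                  [0.55, 0.45]])           # uncertain -> reject silently failing input
print(predict_with_reject(probs))          # [ 0 -1]
```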

JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models

3 code implementations • 17 Feb 2023 • Stefan T. Radev, Marvin Schmitt, Valentin Pratz, Umberto Picchini, Ullrich Köthe, Paul-Christian Bürkner

This work proposes "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities arising in Bayesian surrogate modeling and simulation-based inference.

Time Series Time Series Analysis

Towards Learned Emulation of Interannual Water Isotopologue Variations in General Circulation Models

no code implementations • 31 Jan 2023 • Jonathan Wider, Jakob Kruse, Nils Weitzel, Janica C. Bühler, Ullrich Köthe, Kira Rehfeld

Simulating abundances of stable water isotopologues, i.e. molecules differing in their isotopic composition, within climate models allows for comparisons with proxy data and, thus, for testing hypotheses about past climate and validating climate models under varying climatic conditions.

Whitening Convergence Rate of Coupling-based Normalizing Flows

2 code implementations • 25 Oct 2022 • Felix Draxler, Christoph Schnörr, Ullrich Köthe

For the first time, we make a quantitative statement about this kind of convergence: We prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e. diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow.
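
"Whitening" here means transforming the data so that its covariance matrix becomes the identity. The following NumPy snippet shows the classical eigenvalue-based whitening that, per the paper's result, deep coupling flows approach at a linear rate; it illustrates the target operation, not the flows themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
# strongly correlated 2-D data
x = rng.normal(size=(10_000, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])

cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
# rotate into the eigenbasis and rescale each axis to unit variance
z = (x - x.mean(axis=0)) @ eigvecs / np.sqrt(eigvals)

print(np.round(np.cov(z, rowvar=False), 3))  # ~ identity matrix
```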

Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data

no code implementations • 30 Aug 2022 • Robert Schmier, Ullrich Köthe, Christoph-Nikolas Straehle

We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset.

Anomaly Detection Outlier Detection
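
Schematically, the training objective contrasts the flow's likelihood on the two datasets: raise it on in-distribution features, lower it on features from the auxiliary contrastive set. In the sketch below the flow's log-density is replaced by a unit Gaussian stand-in so the example runs; the paper's actual loss may weight or regularize the two terms differently.

```python
import numpy as np

def log_prob(z):
    """Stand-in for a normalizing flow's log-density (unit Gaussian)."""
    return -0.5 * (z ** 2).sum(axis=-1) - 0.5 * z.shape[-1] * np.log(2 * np.pi)

def contrastive_flow_loss(z_in, z_contrast):
    """Maximize likelihood on in-distribution data, minimize it on
    contrastive data; outliers then score low under the trained flow."""
    return -log_prob(z_in).mean() + log_prob(z_contrast).mean()

rng = np.random.default_rng(0)
z_in = rng.normal(size=(64, 8))                  # in-distribution features
z_contrast = rng.normal(2.0, 1.0, size=(64, 8))  # auxiliary contrastive features
print(contrastive_flow_loss(z_in, z_contrast))
```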

Towards Multimodal Depth Estimation from Light Fields

no code implementations • CVPR 2022 • Titus Leistner, Radek Mackowiak, Lynton Ardizzone, Ullrich Köthe, Carsten Rother

We argue that this is due to current methods considering only a single "true" depth, even when multiple objects at different depths contributed to the color of a single pixel.

Depth Estimation Depth Prediction

Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks

2 code implementations • 16 Dec 2021 • Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev

Neural density estimators have proven remarkably powerful in performing efficient simulation-based Bayesian inference in various research domains.

Bayesian Inference Decision Making +1

Conditional Invertible Neural Networks for Diverse Image-to-Image Translation

1 code implementation • 5 May 2021 • Lynton Ardizzone, Jakob Kruse, Carsten Lüth, Niels Bracher, Carsten Rother, Ullrich Köthe

We introduce a new architecture called a conditional invertible neural network (cINN), and use it to address the task of diverse image-to-image translation for natural images.

Colorization Image Colorization +2

Benchmarking Invertible Architectures on Inverse Problems

no code implementations • 26 Jan 2021 • Jakob Kruse, Lynton Ardizzone, Carsten Rother, Ullrich Köthe

Recent work demonstrated that flow-based invertible neural networks are promising tools for solving ambiguous inverse problems.

Benchmarking

Invertible Neural Networks for Uncertainty Quantification in Photoacoustic Imaging

no code implementations • 10 Nov 2020 • Jan-Hinrich Nölke, Tim Adler, Janek Gröhl, Thomas Kirchner, Lynton Ardizzone, Carsten Rother, Ullrich Köthe, Lena Maier-Hein

Multispectral photoacoustic imaging (PAI) is an emerging imaging modality which enables the recovery of functional tissue parameters such as blood oxygenation.

Uncertainty Quantification

Amortized Bayesian model comparison with evidential deep learning

1 code implementation • 22 Apr 2020 • Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner

This makes the method particularly effective in scenarios where model fit needs to be assessed for a large number of datasets, such that per-dataset inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems.
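
In the evidential framework this entry builds on, the network outputs Dirichlet concentration parameters over the candidate models, and epistemic uncertainty can be read off the total concentration. A minimal sketch, with the common M/alpha_0 uncertainty measure used as an assumption rather than the paper's exact definition:

```python
import numpy as np

def model_probs_and_uncertainty(alpha):
    """Dirichlet-based evidential output over M candidate models.

    The Dirichlet mean gives expected model probabilities; the total
    evidence alpha_0 controls how confident that belief is.
    """
    alpha0 = alpha.sum(axis=-1, keepdims=True)
    probs = alpha / alpha0                        # expected model probabilities
    epistemic = alpha.shape[-1] / alpha0[..., 0]  # M / alpha_0
    return probs, epistemic

probs, u = model_probs_and_uncertainty(np.array([10.0, 2.0, 1.0]))
print(np.round(probs, 3), round(float(u), 3))  # [0.769 0.154 0.077] 0.231
```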

BayesFlow: Learning complex stochastic models with invertible neural networks

2 code implementations • 13 Mar 2020 • Stefan T. Radev, Ulf K. Mertens, Andreas Voss, Lynton Ardizzone, Ullrich Köthe

In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics.

Bayesian Inference Epidemiology
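
Because the observed data are typically exchangeable, the summary network must be invariant to their ordering. Below is a DeepSets-style sketch of that structural idea; the actual BayesFlow summary networks are learned end-to-end and considerably richer.

```python
import numpy as np

def summary_network(x, w1, w2):
    """Embed each observation, then mean-pool: a permutation-invariant summary."""
    h = np.tanh(x @ w1)           # per-observation embedding
    pooled = h.mean(axis=-2)      # invariant to the order of observations
    return np.tanh(pooled @ w2)   # fixed-size summary vector

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))    # 100 observations, 3 features each
w1, w2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 8))
print(np.allclose(summary_network(x, w1, w2),
                  summary_network(x[::-1], w1, w2)))  # True: order-invariant
```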

Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification

3 code implementations • NeurIPS 2020 • Lynton Ardizzone, Radek Mackowiak, Carsten Rother, Ullrich Köthe

In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: Introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN's generative capabilities intact.

General Classification Out-of-Distribution Detection +1

Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)

1 code implementation • ICLR 2020 • Peter Sorrenson, Carsten Rother, Ullrich Köthe

Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation.

Disentanglement

Object Segmentation using Pixel-wise Adversarial Loss

no code implementations • 23 Sep 2019 • Ricard Durall, Franz-Josef Pfreundt, Ullrich Köthe, Janis Keuper

Recent deep learning based approaches have shown remarkable success on object segmentation tasks.

Object Segmentation +1

HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference

1 code implementation • 25 May 2019 • Jakob Kruse, Gianluca Detommaso, Ullrich Köthe, Robert Scheichl

Many recent invertible neural architectures are based on coupling block designs in which the variables are divided into two subsets that serve as inputs of an easily invertible (usually affine) triangular transformation.

Bayesian Inference Density Estimation

The Mutex Watershed and its Objective: Efficient, Parameter-Free Graph Partitioning

no code implementations • 25 Apr 2019 • Steffen Wolf, Alberto Bailoni, Constantin Pape, Nasim Rahaman, Anna Kreshuk, Ullrich Köthe, Fred A. Hamprecht

Unlike seeded watershed, the algorithm can accommodate not only attractive but also repulsive cues, allowing it to find a previously unspecified number of segments without the need for explicit seeds or a tunable threshold.

Clustering graph partitioning +1
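
The core of the algorithm fits in a few lines of union-find: process all attractive and repulsive cues jointly, strongest first; attractive edges merge clusters unless a mutex constraint forbids it, while repulsive edges add such constraints. A compact Python sketch, simplified from the paper's formulation:

```python
class MutexWatershed:
    def __init__(self, n):
        self.parent = list(range(n))
        self.mutex = [set() for _ in range(n)]  # roots a root must never join

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path halving
            u = self.parent[u]
        return u

    def cluster(self, attractive, repulsive):
        edges = [(w, u, v, True) for u, v, w in attractive] + \
                [(w, u, v, False) for u, v, w in repulsive]
        for w, u, v, attract in sorted(edges, reverse=True):  # strongest first
            ru, rv = self.find(u), self.find(v)
            if ru == rv:
                continue
            if attract:
                if rv not in self.mutex[ru]:      # merge unless mutex-forbidden
                    self.parent[rv] = ru
                    self.mutex[ru] |= self.mutex[rv]
                    for r in self.mutex[rv]:      # keep constraints pointing at live roots
                        self.mutex[r].discard(rv)
                        self.mutex[r].add(ru)
            else:
                self.mutex[ru].add(rv)            # forbid this merge forever
                self.mutex[rv].add(ru)

# toy chain 0-1-2 with a strong repulsive cue between 0 and 2
mws = MutexWatershed(3)
mws.cluster(attractive=[(0, 1, 0.9), (1, 2, 0.5)], repulsive=[(0, 2, 0.8)])
print([mws.find(i) for i in range(3)])  # [0, 0, 2]: two segments, no seeds, no threshold
```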

Uncertainty-aware performance assessment of optical imaging modalities with invertible neural networks

no code implementations • 8 Mar 2019 • Tim J. Adler, Lynton Ardizzone, Anant Vemuri, Leonardo Ayala, Janek Gröhl, Thomas Kirchner, Sebastian Wirkert, Jakob Kruse, Carsten Rother, Ullrich Köthe, Lena Maier-Hein

Assessment of the specific hardware used in conjunction with such algorithms, however, has not properly addressed the possibility that the problem may be ill-posed.

Analyzing Inverse Problems with Invertible Neural Networks

2 code implementations • ICLR 2019 • Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen, Lena Maier-Hein, Carsten Rother, Ullrich Köthe

Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters.

Learned Watershed: End-to-End Learning of Seeded Segmentation

no code implementations • ICCV 2017 • Steffen Wolf, Lukas Schott, Ullrich Köthe, Fred Hamprecht

Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm.

Segmentation
