no code implementations • 12 Mar 2024 • Michael Götz, Christian Weber, Franciszek Binczyk, Joanna Polanska, Rafal Tarnawski, Barbara Bobek-Billewicz, Ullrich Köthe, Jens Kleesiek, Bram Stieltjes, Klaus H. Maier-Hein

We propose a new method that employs transfer learning techniques to effectively correct sampling selection errors introduced by sparse annotations during supervised learning for automated tumor segmentation.

no code implementations • 9 Feb 2024 • Felix Draxler, Stefan Wahl, Christoph Schnörr, Ullrich Köthe

We present a novel theoretical framework for understanding the expressive power of coupling-based normalizing flows such as RealNVP.

1 code implementation • 15 Dec 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Ullrich Köthe

Many real-world data, particularly in the natural sciences and computer vision, lie on known Riemannian manifolds such as spheres, tori or the group of rotation matrices.

no code implementations • 15 Dec 2023 • Jens Müller, Lars Kühmichel, Martin Rohbeck, Stefan T. Radev, Ullrich Köthe

In this work, we analyze the conditions under which information about the context of an input $X$ can improve the predictions of deep learning models in new domains.

no code implementations • 9 Dec 2023 • Marvin Schmitt, Valentin Pratz, Ullrich Köthe, Paul-Christian Bürkner, Stefan T Radev

Simulation-based inference (SBI) is constantly in search of more expressive algorithms for accurately inferring the parameters of complex models from noisy data.

1 code implementation • 25 Oct 2023 • Felix Draxler, Peter Sorrenson, Lea Zimmermann, Armand Rousselot, Ullrich Köthe

Normalizing Flows are generative models that directly maximize the likelihood.

no code implementations • 17 Oct 2023 • Lasse Elsemüller, Hans Olischläger, Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev

In this work, we propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to efficiently integrate sensitivity analyses into simulation-based inference with neural networks.

no code implementations • 6 Oct 2023 • Marvin Schmitt, Desi R. Ivanova, Daniel Habermann, Ullrich Köthe, Paul-Christian Bürkner, Stefan T. Radev

We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data.

no code implementations • 18 Sep 2023 • Tim J. Adler, Jan-Hinrich Nölke, Annika Reinke, Minu Dietlinde Tizabi, Sebastian Gruber, Dasha Trofimova, Lynton Ardizzone, Paul F. Jaeger, Florian Buettner, Ullrich Köthe, Lena Maier-Hein

Current deep learning-based solutions for image analysis tasks are commonly incapable of handling problems for which multiple plausible solutions exist.

no code implementations • 4 Aug 2023 • Ullrich Köthe

Change-of-variables (CoV) formulas allow one to reduce complicated probability densities to simpler ones via a learned transformation with a tractable Jacobian determinant.
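For context, the standard change-of-variables formula (textbook notation, not specific to this paper) for an invertible map $f$ with latent density $p_Z$ reads:

```latex
p_X(x) = p_Z\bigl(f(x)\bigr)\,\bigl|\det J_f(x)\bigr|,
\qquad J_f(x) = \frac{\partial f(x)}{\partial x}.
```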

1 code implementation • 28 Jun 2023 • Stefan T Radev, Marvin Schmitt, Lukas Schumacher, Lasse Elsemüller, Valentin Pratz, Yannik Schälte, Ullrich Köthe, Paul-Christian Bürkner

Modern Bayesian inference involves a mixture of computational techniques for estimating, validating, and drawing conclusions from probabilistic models as part of principled workflows for data analysis.

1 code implementation • 23 Jun 2023 • Felix Draxler, Lars Kühmichel, Armand Rousselot, Jens Müller, Christoph Schnörr, Ullrich Köthe

Gaussianization is a simple generative model that can be trained without backpropagation.

2 code implementations • 2 Jun 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Lea Zimmermann, Ullrich Köthe

Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data.

1 code implementation • 20 Mar 2023 • The-Gia Leo Nguyen, Lynton Ardizzone, Ullrich Köthe

Autoencoders are able to learn useful data representations in an unsupervised manner and have been widely used in various machine learning and computer vision tasks.

1 code implementation • 17 Mar 2023 • Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe

We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution.

no code implementations • 17 Mar 2023 • Kris K. Dreher, Leonardo Ayala, Melanie Schellenberg, Marco Hübner, Jan-Hinrich Nölke, Tim J. Adler, Silvia Seidlitz, Jan Sellner, Alexander Studier-Fischer, Janek Gröhl, Felix Nickel, Ullrich Köthe, Alexander Seitel, Lena Maier-Hein

Synthetic medical image generation has evolved as a key technique for neural network training and validation.

3 code implementations • 17 Feb 2023 • Stefan T. Radev, Marvin Schmitt, Valentin Pratz, Umberto Picchini, Ullrich Köthe, Paul-Christian Bürkner

This work proposes "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities arising in Bayesian surrogate modeling and simulation-based inference.

no code implementations • 31 Jan 2023 • Jonathan Wider, Jakob Kruse, Nils Weitzel, Janica C. Bühler, Ullrich Köthe, Kira Rehfeld

Simulating abundances of stable water isotopologues, i.e., molecules differing in their isotopic composition, within climate models allows for comparisons with proxy data and, thus, for testing hypotheses about past climate and validating climate models under varying climatic conditions.

2 code implementations • 23 Nov 2022 • Lukas Schumacher, Paul-Christian Bürkner, Andreas Voss, Ullrich Köthe, Stefan T. Radev

Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model.

2 code implementations • 25 Oct 2022 • Felix Draxler, Christoph Schnörr, Ullrich Köthe

For the first time, we make a quantitative statement about this kind of convergence: we prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e., diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow.
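The whitening effect can be illustrated with an explicit linear whitening transform (a NumPy sketch on toy data; the paper's result concerns what coupling flows learn implicitly, not this explicit construction):

```python
import numpy as np

# Toy 2-D data with a deliberately non-diagonal covariance matrix.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x = rng.standard_normal((10_000, 2)) @ A.T

# Whitening: transform by the inverse Cholesky factor of the sample covariance.
cov = np.cov(x, rowvar=False)
L = np.linalg.cholesky(cov)
z = x @ np.linalg.inv(L).T

# After whitening, the covariance of z is (numerically) the identity matrix.
assert np.allclose(np.cov(z, rowvar=False), np.eye(2), atol=1e-6)
```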

no code implementations • 30 Aug 2022 • Robert Schmier, Ullrich Köthe, Christoph-Nikolas Straehle

We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset.
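The contrastive training objective described here can be sketched as follows, using a simple Gaussian log-density as a stand-in for the normalizing flow (function names are illustrative, not from the paper's code):

```python
import numpy as np

def log_gaussian(x):
    """Stand-in log-density; in the paper this role is played by a normalizing flow."""
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def contrastive_objective(x_in, x_contrastive):
    # Maximize the likelihood of in-distribution features while
    # minimizing the likelihood of contrastive (auxiliary) features.
    return -np.mean(log_gaussian(x_in)) + np.mean(log_gaussian(x_contrastive))

# Features near the in-distribution mode score a lower (better) objective
# than the reversed assignment.
x_in = np.zeros(100)
x_aux = np.full(100, 4.0)
assert contrastive_objective(x_in, x_aux) < contrastive_objective(x_aux, x_in)
```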

1 code implementation • 29 Jul 2022 • Malte Tölle, Ullrich Köthe, Florian André, Benjamin Meder, Sandy Engelhardt

Manipulation of the latent space leads to a modified image while preserving important details.

no code implementations • CVPR 2022 • Titus Leistner, Radek Mackowiak, Lynton Ardizzone, Ullrich Köthe, Carsten Rother

We argue that this is due to current methods considering only a single "true" depth, even when multiple objects at different depths contributed to the color of a single pixel.

2 code implementations • 16 Dec 2021 • Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev

Neural density estimators have proven remarkably powerful in performing efficient simulation-based Bayesian inference in various research domains.

1 code implementation • 5 May 2021 • Lynton Ardizzone, Jakob Kruse, Carsten Lüth, Niels Bracher, Carsten Rother, Ullrich Köthe

We introduce a new architecture called a conditional invertible neural network (cINN), and use it to address the task of diverse image-to-image translation for natural images.

no code implementations • 26 Jan 2021 • Jakob Kruse, Lynton Ardizzone, Carsten Rother, Ullrich Köthe

Recent work demonstrated that flow-based invertible neural networks are promising tools for solving ambiguous inverse problems.

no code implementations • 10 Nov 2020 • Jan-Hinrich Nölke, Tim Adler, Janek Gröhl, Thomas Kirchner, Lynton Ardizzone, Carsten Rother, Ullrich Köthe, Lena Maier-Hein

Multispectral photoacoustic imaging (PAI) is an emerging imaging modality which enables the recovery of functional tissue parameters such as blood oxygenation.

no code implementations • 14 Oct 2020 • Jens Müller, Robert Schmier, Lynton Ardizzone, Carsten Rother, Ullrich Köthe

Standard supervised learning breaks down under data distribution shift.

1 code implementation • 1 Oct 2020 • Stefan T. Radev, Frederik Graw, Simiao Chen, Nico T. Mutters, Vanessa M. Eichel, Till Bärnighausen, Ullrich Köthe

Mathematical models in epidemiology are an indispensable tool to determine the dynamics and important characteristics of infectious diseases.

2 code implementations • CVPR 2021 • Radek Mackowiak, Lynton Ardizzone, Ullrich Köthe, Carsten Rother

Generative classifiers (GCs) are a promising class of models that are said to naturally accomplish these qualities.

1 code implementation • 22 Apr 2020 • Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner

This makes the method particularly effective in scenarios where model fit needs to be assessed for a large number of datasets, so that per-dataset inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems.

2 code implementations • 13 Mar 2020 • Stefan T. Radev, Ulf K. Mertens, Andreas Voss, Lynton Ardizzone, Ullrich Köthe

In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics.

3 code implementations • NeurIPS 2020 • Lynton Ardizzone, Radek Mackowiak, Carsten Rother, Ullrich Köthe

In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN's generative capabilities intact.

1 code implementation • ICLR 2020 • Peter Sorrenson, Carsten Rother, Ullrich Köthe

Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation.

no code implementations • 5 Nov 2019 • Tim J. Adler, Leonardo Ayala, Lynton Ardizzone, Hannes G. Kenngott, Anant Vemuri, Beat P. Müller-Stich, Carsten Rother, Ullrich Köthe, Lena Maier-Hein

Multispectral optical imaging is becoming a key tool in the operating room.


no code implementations • 25 Sep 2019 • Lynton Ardizzone, Carsten Lüth, Jakob Kruse, Carsten Rother, Ullrich Köthe

In this work, we address the task of natural image generation guided by a conditioning input.

no code implementations • 23 Sep 2019 • Ricard Durall, Franz-Josef Pfreundt, Ullrich Köthe, Janis Keuper

Recent deep learning based approaches have shown remarkable success on object segmentation tasks.

6 code implementations • 4 Jul 2019 • Lynton Ardizzone, Carsten Lüth, Jakob Kruse, Carsten Rother, Ullrich Köthe

We demonstrate these properties for the tasks of MNIST digit generation and image colorization.

1 code implementation • 25 May 2019 • Jakob Kruse, Gianluca Detommaso, Ullrich Köthe, Robert Scheichl

Many recent invertible neural architectures are based on coupling block designs in which the variables are divided into two subsets that serve as inputs to an easily invertible (usually affine) triangular transformation.
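Such a coupling block can be sketched in a few lines (an illustrative NumPy toy, not the paper's implementation; the "networks" here are arbitrary placeholder functions):

```python
import numpy as np

def affine_coupling(x, scale_net, shift_net):
    """Forward pass of an affine coupling block.

    The input is split into two halves; the first half conditions an
    affine (hence triangular, easily invertible) transform of the second.
    """
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(scale_net(x1)) + shift_net(x1)
    return np.concatenate([x1, y2])

def affine_coupling_inverse(y, scale_net, shift_net):
    """Inverse pass: undo the affine transform, conditioned on the untouched half."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift_net(y1)) * np.exp(-scale_net(y1))
    return np.concatenate([y1, x2])

# Placeholder "networks": any functions of the first half will do.
scale = lambda u: 0.1 * u
shift = lambda u: u ** 2

x = np.array([1.0, -2.0, 0.5, 3.0])
y = affine_coupling(x, scale, shift)
assert np.allclose(affine_coupling_inverse(y, scale, shift), x)
```

The block is invertible by construction, regardless of how complex the conditioning functions are, because each output half depends on the other half only through an affine map.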

no code implementations • 25 Apr 2019 • Steffen Wolf, Alberto Bailoni, Constantin Pape, Nasim Rahaman, Anna Kreshuk, Ullrich Köthe, Fred A. Hamprecht

Unlike seeded watershed, the algorithm can accommodate not only attractive but also repulsive cues, allowing it to find a previously unspecified number of segments without the need for explicit seeds or a tunable threshold.

no code implementations • 8 Mar 2019 • Tim J. Adler, Lynton Ardizzone, Anant Vemuri, Leonardo Ayala, Janek Gröhl, Thomas Kirchner, Sebastian Wirkert, Jakob Kruse, Carsten Rother, Ullrich Köthe, Lena Maier-Hein

Assessment of the specific hardware used in conjunction with such algorithms, however, has not properly addressed the possibility that the problem may be ill-posed.

2 code implementations • ICLR 2019 • Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen, Lena Maier-Hein, Carsten Rother, Ullrich Köthe

Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters.
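A minimal illustration of this ambiguity (a generic toy, not an example from the paper): the forward process is a well-defined function, yet inverting a single measurement yields multiple valid parameter values.

```python
# Forward process: well-defined, one parameter -> one measurement.
forward = lambda x: x ** 2

# Inverse problem: one measurement -> multiple candidate parameters.
measurement = 4.0
candidate_parameters = [2.0, -2.0]  # two distinct parameters, same measurement
assert all(forward(x) == measurement for x in candidate_parameters)
```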

no code implementations • ICCV 2017 • Steffen Wolf, Lukas Schott, Ullrich Köthe, Fred Hamprecht

Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm.


Papers With Code is a free resource with all data licensed under CC-BY-SA.