Search Results for author: Peter Maass

Found 19 papers, 10 papers with code

Score-Based Generative Models for PET Image Reconstruction

1 code implementation • 27 Aug 2023 • Imraj RD Singh, Alexander Denker, Riccardo Barbano, Željko Kereta, Bangti Jin, Kris Thielemans, Peter Maass, Simon Arridge

Score-based generative models have demonstrated highly promising results for medical image reconstruction tasks in magnetic resonance imaging or computed tomography.

Image Reconstruction

SVD-DIP: Overcoming the Overfitting Problem in DIP-based CT Reconstruction

1 code implementation • 28 Mar 2023 • Marco Nittscher, Michael Lameter, Riccardo Barbano, Johannes Leuschner, Bangti Jin, Peter Maass

The deep image prior (DIP) is a well-established unsupervised deep learning method for image reconstruction; yet it is far from being flawless.

Image Reconstruction

Model Stitching and Visualization: How GAN Generators can Invert Networks in Real-Time

no code implementations • 4 Feb 2023 • Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Jean Le'Clerc Arrastia, Peter Maass

In this work, we propose a fast and accurate method to reconstruct activations of classification and semantic segmentation networks by stitching them with a GAN generator utilizing a 1x1 convolution.

Semantic Segmentation
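The stitching step described above admits a compact sketch: a 1x1 convolution is simply a learned linear map across channels, applied independently at every spatial position. The shapes, weights, and function name below are illustrative toys, not the paper's implementation:

```python
import numpy as np

# A 1x1 convolution as a per-pixel linear map across channels:
# activations from one network (C_in channels) are mapped into
# another network's feature space (C_out channels).
def stitch_1x1(activations, weight, bias):
    """activations: (C_in, H, W); weight: (C_out, C_in); bias: (C_out,)"""
    out = np.einsum('oc,chw->ohw', weight, activations)
    return out + bias[:, None, None]

rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 8, 8))    # toy classifier activations
W = 0.1 * rng.standard_normal((128, 64))  # toy stitching weights (would be learned)
b = np.zeros(128)
stitched = stitch_1x1(acts, W, b)
print(stitched.shape)  # (128, 8, 8)
```

Because the map is linear and pointwise, it can translate between feature spaces cheaply, which is what makes real-time inversion through a generator plausible.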

Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems

1 code implementation • 6 Dec 2022 • Derick Nganyu Tanyu, Jianfeng Ning, Tom Freudenberg, Nick Heilenkötter, Andreas Rademacher, Uwe Iben, Peter Maass

Recent years have witnessed a growth in mathematics for deep learning, which seeks a deeper mathematical understanding of deep learning concepts and explores how to make deep learning more robust, and in deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics.

SELTO: Sample-Efficient Learned Topology Optimization

no code implementations • 12 Sep 2022 • Sören Dittmer, David Erzmann, Henrik Harms, Peter Maass

Recent developments in Deep Learning (DL) suggest a vast potential for Topology Optimization (TO).

PatchNR: Learning from Very Few Images by Patch Normalizing Flow Regularization

1 code implementation • 24 May 2022 • Fabian Altekrüger, Alexander Denker, Paul Hagemann, Johannes Hertrich, Peter Maass, Gabriele Steidl

Learning neural networks from only a small amount of available data is an important ongoing research topic with tremendous potential for applications.

Computed Tomography (CT)

Conditional Invertible Neural Networks for Medical Imaging

2 code implementations • 26 Oct 2021 • Alexander Denker, Maximilian Schmidt, Johannes Leuschner, Peter Maass

In recent years, deep learning methods have become an increasingly popular choice for solving tasks from the field of inverse problems.

Ground Truth Free Denoising by Optimal Transport

1 code implementation • 3 Jul 2020 • Sören Dittmer, Carola-Bibiane Schönlieb, Peter Maass

We present a learned unsupervised denoising method for arbitrary types of data, which we explore on images and one-dimensional signals.

Denoising • Generative Adversarial Network

Deep image prior for 3D magnetic particle imaging: A quantitative comparison of regularization techniques on Open MPI dataset

no code implementations • 3 Jul 2020 • Sören Dittmer, Tobias Kluth, Mads Thorstein Roar Henriksen, Peter Maass

Magnetic particle imaging (MPI) is an imaging modality exploiting the nonlinear magnetization behavior of (super-)paramagnetic nanoparticles to obtain a space- and often also time-dependent concentration of a tracer consisting of these nanoparticles.

Image Reconstruction

A Projectional Ansatz to Reconstruction

1 code implementation • 10 Jul 2019 • Sören Dittmer, Peter Maass

Recently, the field of inverse problems has seen a growing usage of learned and non-learned priors that are only partially understood mathematically.

Denoising

On the Connection Between Adversarial Robustness and Saliency Map Interpretability

1 code implementation • 10 May 2019 • Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb

Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts.

Adversarial Robustness

Invariance and Inverse Stability under ReLU

no code implementations • ICLR 2019 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass

We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping.
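The non-uniqueness of the inverse mapping is easy to see on a single ReLU layer: wherever the activation clips, infinitely many pre-activations produce the same output. A minimal illustration (toy vectors, not from the paper):

```python
import numpy as np

# ReLU maps many distinct pre-activations to the same output, so the
# inverse of a ReLU layer is not well defined without extra assumptions.
relu = lambda x: np.maximum(x, 0.0)

x1 = np.array([1.0, -2.0, 3.0])
x2 = np.array([1.0, -7.0, 3.0])   # differs only where ReLU clips
print(np.array_equal(relu(x1), relu(x2)))  # True: two preimages, one output
```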

Regularization by architecture: A deep prior approach for inverse problems

2 code implementations • 10 Dec 2018 • Sören Dittmer, Tobias Kluth, Peter Maass, Daniel Otero Baguer

The present paper studies so-called deep image prior (DIP) techniques in the context of ill-posed inverse problems.
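The core DIP mechanism can be sketched in a toy setting, under simplifying assumptions that are not the paper's setup (identity forward operator, a tiny fully connected network with hand-coded backpropagation instead of a convolutional architecture): an untrained network is fitted by gradient descent to a single noisy measurement, with the architecture itself, together with early stopping, acting as the regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, hidden = 32, 64
t = np.linspace(0.0, 2.0 * np.pi, n)
y = np.sin(t) + 0.3 * rng.standard_normal(n)  # single noisy measurement

z = rng.standard_normal(16)                    # fixed random network input
W1 = 0.1 * rng.standard_normal((hidden, 16))   # untrained weights
W2 = 0.1 * rng.standard_normal((n, hidden))

lr = 1e-3
losses = []
for _ in range(500):                           # early stopping = few iterations
    h = np.maximum(W1 @ z, 0.0)                # ReLU hidden layer
    out = W2 @ h                               # network output f_theta(z)
    r = out - y
    losses.append(float(r @ r))                # data fidelity ||f_theta(z) - y||^2
    dW2 = 2.0 * np.outer(r, h)                 # backprop by hand
    dh = 2.0 * (W2.T @ r) * (h > 0)
    dW1 = np.outer(dh, z)
    W2 -= lr * dW2
    W1 -= lr * dW1
```

In an actual DIP reconstruction a forward operator sits between the network output and the data term, and stopping the descent early is what prevents the net from fitting the noise.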

Singular Values for ReLU Layers

no code implementations • 6 Dec 2018 • Sören Dittmer, Emily J. King, Peter Maass

By presenting, on the one hand, theoretical justifications, results, and interpretations of these two concepts and, on the other hand, numerical experiments and results of the ReLU singular values and the Gaussian mean width applied to trained neural networks, we hope to give a comprehensive, singular-value-centric view of ReLU layers.

A Survey on Surrogate Approaches to Non-negative Matrix Factorization

no code implementations • 6 Aug 2018 • Pascal Fernsel, Peter Maass

Motivated by applications in hyperspectral imaging we investigate methods for approximating a high-dimensional non-negative matrix $\mathbf{\mathit{Y}}$ by a product of two lower-dimensional, non-negative matrices $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}$. This so-called non-negative matrix factorization is based on defining suitable Tikhonov functionals, which combine a discrepancy measure for $\mathbf{\mathit{Y}}\approx\mathbf{\mathit{KX}}$ with penalty terms for enforcing additional properties of $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}$.
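The best-known example of a surrogate (majorize-minimize) scheme for NMF is the classic Lee-Seung multiplicative update for the plain Frobenius discrepancy; the penalized Tikhonov variants the survey covers add further terms. A minimal sketch with illustrative sizes, rank, and iteration count:

```python
import numpy as np

# Lee-Seung multiplicative updates for min ||Y - K X||_F^2 over K, X >= 0.
# Positive initialization plus ratio-of-nonnegatives updates keep both
# factors non-negative throughout.
def nmf_multiplicative(Y, rank, iters=200, seed=0, eps=1e-10):
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    K = rng.random((m, rank)) + eps
    X = rng.random((rank, n)) + eps
    for _ in range(iters):
        X *= (K.T @ Y) / (K.T @ K @ X + eps)
        K *= (Y @ X.T) / (K @ X @ X.T + eps)
    return K, X

Y = np.random.default_rng(1).random((20, 30))   # toy non-negative data
K, X = nmf_multiplicative(Y, rank=5)
err = np.linalg.norm(Y - K @ X) / np.linalg.norm(Y)
```

Each update is the exact minimizer of a separable surrogate that majorizes the discrepancy at the current iterate, which is why the objective decreases monotonically without a step size.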
