1 code implementation • 28 Oct 2023 • Derick Nganyu Tanyu, Jianfeng Ning, Andreas Hauptmann, Bangti Jin, Peter Maass
A suite of performance metrics is employed to assess the efficacy of these methods.
no code implementations • 28 Aug 2023 • Riccardo Barbano, Alexander Denker, Hyungjin Chung, Tae Hoon Roh, Simon Arridge, Peter Maass, Bangti Jin, Jong Chul Ye
Denoising diffusion models have emerged as the go-to framework for solving inverse problems in imaging.
1 code implementation • 27 Aug 2023 • Imraj RD Singh, Alexander Denker, Riccardo Barbano, Željko Kereta, Bangti Jin, Kris Thielemans, Peter Maass, Simon Arridge
Score-based generative models have demonstrated highly promising results for medical image reconstruction tasks in magnetic resonance imaging or computed tomography.
1 code implementation • 28 Mar 2023 • Marco Nittscher, Michael Lameter, Riccardo Barbano, Johannes Leuschner, Bangti Jin, Peter Maass
The deep image prior (DIP) is a well-established unsupervised deep learning method for image reconstruction; yet it is far from being flawless.
no code implementations • 4 Feb 2023 • Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Jean Le'Clerc Arrastia, Peter Maass
In this work, we propose a fast and accurate method to reconstruct activations of classification and semantic segmentation networks by stitching them with a GAN generator utilizing a 1x1 convolution.
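The stitching step can be illustrated in a few lines: a 1x1 convolution is simply a per-pixel linear map across channels, so mapping classifier activations into a generator's feature space amounts to one matrix multiply per spatial location. The sketch below is a minimal numpy illustration of that operation; the shapes and the random weights are placeholders, not the paper's architecture.

```python
import numpy as np

def conv1x1(activations, weight):
    # activations: (C_in, H, W); weight: (C_out, C_in).
    # A 1x1 convolution applies the same channel-mixing matrix at every pixel.
    c_in, h, w = activations.shape
    flat = activations.reshape(c_in, -1)       # (C_in, H*W)
    return (weight @ flat).reshape(-1, h, w)   # (C_out, H, W)

# Toy example: map 64 classifier channels to 32 generator channels.
rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 8, 8))         # stand-in for network activations
W = 0.1 * rng.standard_normal((32, 64))        # hypothetical learned 1x1 weights
stitched = conv1x1(acts, W)
print(stitched.shape)
```

In practice the weight matrix would be trained so that the generator reproduces the original input from the stitched activations; here it is random purely to show the mechanics.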
1 code implementation • 6 Dec 2022 • Derick Nganyu Tanyu, Jianfeng Ning, Tom Freudenberg, Nick Heilenkötter, Andreas Rademacher, Uwe Iben, Peter Maass
Recent years have witnessed growth both in mathematics for deep learning, which seeks a deeper mathematical understanding of deep learning concepts and explores how to make them more robust, and in deep learning for mathematics, where deep learning algorithms are used to solve mathematical problems.
no code implementations • 12 Sep 2022 • Sören Dittmer, David Erzmann, Henrik Harms, Peter Maass
Recent developments in Deep Learning (DL) suggest a vast potential for Topology Optimization (TO).
1 code implementation • 24 May 2022 • Fabian Altekrüger, Alexander Denker, Paul Hagemann, Johannes Hertrich, Peter Maass, Gabriele Steidl
Learning neural networks from only limited available data is an important ongoing research topic with tremendous potential for applications.
2 code implementations • 26 Oct 2021 • Alexander Denker, Maximilian Schmidt, Johannes Leuschner, Peter Maass
In recent years, deep learning methods have become an increasingly popular choice for solving tasks from the field of inverse problems.
1 code implementation • 3 Jul 2020 • Sören Dittmer, Carola-Bibiane Schönlieb, Peter Maass
We present a learned unsupervised denoising method for arbitrary types of data, which we explore on images and one-dimensional signals.
no code implementations • 3 Jul 2020 • Sören Dittmer, Tobias Kluth, Mads Thorstein Roar Henriksen, Peter Maass
Magnetic particle imaging (MPI) is an imaging modality exploiting the nonlinear magnetization behavior of (super-)paramagnetic nanoparticles to obtain a space- and often also time-dependent concentration of a tracer consisting of these nanoparticles.
no code implementations • 10 Dec 2019 • Christian Etmann, Maximilian Schmidt, Jens Behrmann, Tobias Boskamp, Lena Hauberg-Lotte, Annette Peter, Rita Casadonte, Jörg Kriegsmann, Peter Maass
Neural networks have recently been established as a viable classification method for imaging mass spectrometry data for tumor typing.
1 code implementation • 10 Jul 2019 • Sören Dittmer, Peter Maass
Recently, the field of inverse problems has seen growing use of learned and non-learned priors that are only partially understood mathematically.
1 code implementation • 10 May 2019 • Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts.
no code implementations • ICLR 2019 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass
We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping.
2 code implementations • 10 Dec 2018 • Sören Dittmer, Tobias Kluth, Peter Maass, Daniel Otero Baguer
The present paper studies so-called deep image prior (DIP) techniques in the context of ill-posed inverse problems.
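The basic DIP mechanism behind such studies fits an untrained network to a single corrupted observation and relies on early stopping as the implicit regulariser. The following is a deliberately tiny numpy sketch of that idea on a 1-D denoising toy problem; the network, step count, and learning rate are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, hidden = 64, 32
t = np.linspace(-1.0, 1.0, n)[:, None]                     # fixed network input
y = np.sin(3.0 * t[:, 0]) + 0.3 * rng.standard_normal(n)   # noisy 1-D signal

W1 = rng.standard_normal((1, hidden))                      # untrained one-hidden-
b1 = rng.standard_normal(hidden)                           # layer ReLU network
W2 = 0.1 * rng.standard_normal(hidden)                     # f_theta(t)

def forward():
    h = np.maximum(t @ W1 + b1, 0.0)
    return h, h @ W2

_, out = forward()
loss0 = np.mean((out - y) ** 2)                            # misfit at initialisation

lr = 1e-2
for _ in range(300):                                       # early stopping: only a
    h, out = forward()                                     # few hundred steps
    g_out = 2.0 * (out - y) / n                            # grad of mean sq. error
    gh = g_out[:, None] * W2 * (h > 0.0)                   # backprop through ReLU
    W2 -= lr * (h.T @ g_out)
    b1 -= lr * gh.sum(axis=0)
    W1 -= lr * (t.T @ gh)

_, out = forward()
loss = np.mean((out - y) ** 2)
print(loss < loss0)
```

Stopping after a modest number of iterations is what keeps the network from fitting the noise exactly; the paper's contribution concerns how and why this works for ill-posed problems, which this sketch does not attempt to capture.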
no code implementations • 6 Dec 2018 • Sören Dittmer, Emily J. King, Peter Maass
By presenting, on the one hand, theoretical justifications, results, and interpretations of these two concepts and, on the other hand, numerical experiments applying ReLU singular values and the Gaussian mean width to trained neural networks, we hope to give a comprehensive, singular-value-centric view of ReLU layers.
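One way to build intuition for singular-value analysis of ReLU layers: at a fixed input, the layer x -> ReLU(Wx) acts as the linear map DW, where D masks the inactive units, so its local singular values can be read off with an ordinary SVD. This is a generic illustration of that local-linearity view, not the paper's exact definition of ReLU singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))                    # layer weights (toy sizes)
x = rng.standard_normal(8)                         # one fixed input
D = np.diag((W @ x > 0).astype(float))             # active-unit mask at x
svals = np.linalg.svd(D @ W, compute_uv=False)     # local singular values of DW
print(svals)
```

Because the mask D changes with x, these values vary across inputs, which is precisely what makes a global, input-independent notion of ReLU singular values nontrivial.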
no code implementations • 6 Aug 2018 • Pascal Fernsel, Peter Maass
Motivated by applications in hyperspectral imaging we investigate methods for approximating a high-dimensional non-negative matrix $\mathbf{\mathit{Y}}$ by a product of two lower-dimensional, non-negative matrices $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}.$ This so-called non-negative matrix factorization is based on defining suitable Tikhonov functionals, which combine a discrepancy measure for $\mathbf{\mathit{Y}}\approx\mathbf{\mathit{KX}}$ with penalty terms for enforcing additional properties of $\mathbf{\mathit{K}}$ and $\mathbf{\mathit{X}}$.
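The plain factorization underlying this setup can be sketched with the classical multiplicative updates for the Frobenius discrepancy, which preserve non-negativity of both factors at every step. This is a minimal illustration of $Y \approx KX$ only; the paper's Tikhonov-type penalty terms are omitted, and all sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 30, 4
Y = rng.random((m, r)) @ rng.random((r, n))        # non-negative matrix of rank r

K = rng.random((m, r)) + 1e-3                      # strictly positive initial
X = rng.random((r, n)) + 1e-3                      # guesses for both factors
eps = 1e-12                                        # guards against division by 0
for _ in range(1000):
    X *= (K.T @ Y) / (K.T @ K @ X + eps)           # multiplicative updates keep
    K *= (Y @ X.T) / (K @ X @ X.T + eps)           # K and X non-negative

rel_err = np.linalg.norm(Y - K @ X) / np.linalg.norm(Y)
print(rel_err)
```

Since the updates multiply by non-negative factors, no projection step is needed; adding the paper's penalty terms would modify the numerators and denominators of these updates.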
no code implementations • 2 May 2017 • Jens Behrmann, Christian Etmann, Tobias Boskamp, Rita Casadonte, Jörg Kriegsmann, Peter Maass
Deep learning offers an approach to learn feature extraction and classification combined in a single model.