Search Results for author: Martin Genzel

Found 13 papers, 5 papers with code

Let's Enhance: A Deep Learning Approach to Extreme Deblurring of Text Images

1 code implementation • 18 Nov 2022 • Theophil Trippe, Martin Genzel, Jan Macdonald, Maximilian März

This work presents a novel deep-learning-based pipeline for the inverse problem of image deblurring, leveraging augmentation and pre-training with synthetic data.

Deblurring • Image Deblurring +1
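The pipeline above relies on pre-training with synthetic data. A minimal sketch of how such synthetic blurred/sharp training pairs can be generated (an illustrative assumption, not the paper's actual pipeline) is to convolve clean images with a Gaussian blur kernel:

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's pipeline): generate a
# synthetic blurred/sharp training pair by convolving a clean image with
# a normalized Gaussian kernel.
def gaussian_kernel(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # normalize so the kernel preserves image mass

def blur(img, kernel):
    # Direct 2-D convolution with zero padding ("same" output size).
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

clean = np.zeros((32, 32))
clean[12:20, 12:20] = 1.0          # a synthetic high-contrast "text-like" patch
blurred = blur(clean, gaussian_kernel())
# (blurred, clean) would form one supervised (input, target) pair
```

In practice a deblurring network would be trained on many such pairs before fine-tuning on real data; the kernel size and sigma here are arbitrary illustration choices.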

Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning

1 code implementation • 14 Jun 2022 • Martin Genzel, Ingo Gühring, Jan Macdonald, Maximilian März

This work is concerned with the following fundamental question in scientific machine learning: Can deep-learning-based methods solve noise-free inverse problems to near-perfect accuracy?

Computed Tomography (CT)

Gradient-Based Learning of Discrete Structured Measurement Operators for Signal Recovery

1 code implementation • 7 Feb 2022 • Jonathan Sauder, Martin Genzel, Peter Jung

Countless signal processing applications include the reconstruction of signals from few indirect linear measurements.
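The baseline problem referred to here — recovering a structured signal from few linear measurements $y = Ax$ — can be sketched with a classical method. Below is a minimal ISTA (iterative soft-thresholding) example for sparse recovery; this is a standard textbook algorithm given for illustration, not the learned-operator method of the paper:

```python
import numpy as np

# Illustrative sketch: recover a k-sparse signal x from m < n linear
# measurements y = A x via ISTA (proximal gradient for the Lasso).
rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

lam = 0.01                                    # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L, L = ||A||_2^2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                  # gradient of 0.5 * ||A x - y||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

Unrolled-optimization approaches such as the one in this paper take iterations of this kind and learn their parameters (and, here, the measurement operator $A$ itself) end-to-end.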

Near-Exact Recovery for Sparse-View CT via Data-Driven Methods

no code implementations • NeurIPS 2021 Workshop on Deep Learning and Inverse Problems • Martin Genzel, Ingo Gühring, Jan Macdonald, Maximilian März

This work presents an empirical study on the design and training of iterative neural networks for image reconstruction from tomographic measurements with unknown geometry.

Image Reconstruction

Learning Structured Sparse Matrices for Signal Recovery via Unrolled Optimization

no code implementations • NeurIPS 2021 Workshop on Deep Learning and Inverse Problems • Jonathan Sauder, Martin Genzel, Peter Jung

Countless signal processing applications include the reconstruction of an unknown signal from very few indirect linear measurements.

The Separation Capacity of Random Neural Networks

no code implementations • 31 Jul 2021 • Sjoerd Dirksen, Martin Genzel, Laurent Jacques, Alexander Stollenwerk

Neural networks with random weights appear in a variety of machine learning applications, most prominently as the initialization of many deep learning algorithms and as a computationally cheap alternative to fully learned neural networks.

Memorization

AAPM DL-Sparse-View CT Challenge Submission Report: Designing an Iterative Network for Fanbeam-CT with Unknown Geometry

1 code implementation • 1 Jun 2021 • Martin Genzel, Jan Macdonald, Maximilian März

This report is dedicated to a short motivation and description of our contribution to the AAPM DL-Sparse-View CT Challenge (team name: "robust-and-stable").

Solving Inverse Problems With Deep Neural Networks -- Robustness Included?

1 code implementation • 9 Nov 2020 • Martin Genzel, Jan Macdonald, Maximilian März

In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems.

Image Reconstruction

A Unified Approach to Uniform Signal Recovery From Non-Linear Observations

no code implementations • 19 Sep 2020 • Martin Genzel, Alexander Stollenwerk

An important characteristic of the associated recovery guarantees is uniformity, i.e., recovery succeeds for an entire class of structured signals with a fixed measurement ensemble.

Information Theory • Statistics Theory

Generic Error Bounds for the Generalized Lasso with Sub-Exponential Data

no code implementations • 11 Apr 2020 • Martin Genzel, Christian Kipp

This work performs a non-asymptotic analysis of the generalized Lasso under the assumption of sub-exponential data.

Retrieval

The Mismatch Principle: The Generalized Lasso Under Large Model Uncertainties

no code implementations • 20 Aug 2018 • Martin Genzel, Gitta Kutyniok

We study the estimation capacity of the generalized Lasso, i.e., least squares minimization combined with a (convex) structural constraint.

Learning Theory • Variable Selection
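For context, the generalized Lasso referred to in the abstract is commonly written as a constrained least-squares program (a standard formulation, stated here for illustration):

$$\hat{x} \in \operatorname*{argmin}_{x \in K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl( y_i - \langle a_i, x \rangle \bigr)^2,$$

where $K \subset \mathbb{R}^n$ is a convex constraint set encoding the structural prior (e.g., a scaled $\ell_1$-ball for sparsity), and $(a_i, y_i)_{1 \le i \le m}$ are the measurement vectors and observations.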

A Mathematical Framework for Feature Selection from Real-World Data with Non-Linear Observations

no code implementations • 31 Aug 2016 • Martin Genzel, Gitta Kutyniok

In this paper, we study the challenge of feature selection based on a relatively small collection of sample pairs $\{(x_i, y_i)\}_{1 \leq i \leq m}$.

Variable Selection
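A typical non-linear observation model in this line of work is the single-index model (an illustrative assumption, not necessarily the paper's exact setup):

$$y_i = f\bigl( \langle x_i, w^\ast \rangle \bigr) + e_i, \quad 1 \le i \le m,$$

where $f$ is an unknown (possibly non-linear) link function, $w^\ast$ is a sparse parameter vector whose support encodes the selected features, and $e_i$ is noise.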

Sparse Proteomics Analysis - A compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data

no code implementations • 11 Jun 2015 • Tim Conrad, Martin Genzel, Nada Cvetkovic, Niklas Wulkow, Alexander Leichtle, Jan Vybiral, Gitta Kutyniok, Christof Schütte

We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, which allows us to identify a minimal discriminating set of features from mass spectrometry datasets.

General Classification
