Search Results for author: Johannes Lederer

Found 38 papers, 11 papers with code

Benchmarking the Fairness of Image Upsampling Methods

no code implementations • 24 Jan 2024 • Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer

Recent years have witnessed a rapid development of deep generative models for creating synthetic media, such as images and videos.

Benchmarking Fairness

Affine Invariance in Continuous-Domain Convolutional Neural Networks

no code implementations • 13 Nov 2023 • Ali Mohaddes, Johannes Lederer

The notion of group invariance helps neural networks in recognizing patterns and features under geometric transformations.

Set-Membership Inference Attacks using Data Watermarking

no code implementations • 22 Jun 2023 • Mike Laszkiewicz, Denis Lukovnikov, Johannes Lederer, Asja Fischer

In this work, we propose a set-membership inference attack for generative models using deep image watermarking techniques.

Inference Attack · Membership Inference Attack

Single-Model Attribution of Generative Models Through Final-Layer Inversion

no code implementations • 26 May 2023 • Mike Laszkiewicz, Jonas Ricker, Johannes Lederer, Asja Fischer

Recent breakthroughs in generative modeling have sparked interest in practical single-model attribution.

Anomaly Detection

Lag selection and estimation of stable parameters for multiple autoregressive processes through convex programming

no code implementations • 3 Mar 2023 • Somnath Chakraborty, Johannes Lederer, Rainer von Sachs

We prove that the estimated process is stable, and we establish rates for the forecasting error that can outmatch the known rate in our setting.

Time Series · Time Series Analysis

The DeepCAR Method: Forecasting Time-Series Data That Have Change Points

1 code implementation • 22 Feb 2023 • Ayla Jungbluth, Johannes Lederer

Many methods for time-series forecasting are known in classical statistics, such as autoregression, moving averages, and exponential smoothing.

Time Series · Time Series Forecasting
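The classical baselines this abstract mentions are easy to sketch. The following is an illustrative toy implementation (function names are mine), not code from the DeepCAR paper:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    Returns the final smoothed value, which serves as the one-step forecast."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def moving_average(series, window):
    """Forecast the next value as the mean of the last `window` observations."""
    tail = series[-window:]
    return sum(tail) / len(tail)

# A constant series is forecast (almost) exactly by both methods.
flat = [5.0] * 10
print(exponential_smoothing(flat, alpha=0.3))  # ≈ 5.0
print(moving_average(flat, window=4))          # 5.0
```

Change points are exactly where such fixed-weight schemes struggle, which motivates augmenting them with learned components.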

Statistical guarantees for sparse deep learning

no code implementations • 11 Dec 2022 • Johannes Lederer

Neural networks are becoming increasingly popular in applications, but our mathematical understanding of their potential and limitations is still limited.

Marginal Tail-Adaptive Normalizing Flows

1 code implementation • 21 Jun 2022 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer

Learning the tail behavior of a distribution is a notoriously difficult problem.

Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks

no code implementations • 9 May 2022 • Mahsa Taheri, Fang Xie, Johannes Lederer

Since statistical guarantees for neural networks are usually restricted to global optima of intricate objective functions, it is not clear whether these theories really explain the performances of actual outputs of neural-network pipelines.

VC-PCR: A Prediction Method based on Supervised Variable Selection and Clustering

no code implementations • 2 Feb 2022 • Rebecca Marion, Johannes Lederer, Bernadette Govaerts, Rainer von Sachs

Sparse linear prediction methods suffer from decreased prediction accuracy when the predictor variables have cluster structure (e.g., there are highly correlated groups of variables).

Clustering · regression +1

Depth Normalization of Small RNA Sequencing: Using Data and Biology to Select a Suitable Method

2 code implementations • 13 Jan 2022 • Yannick Düren, Johannes Lederer, Li-Xuan Qin

To address this problem, we developed "DANA" - an approach for assessing the performance of normalization methods for microRNA sequencing data based on biology-motivated and data-driven metrics.

Copula-Based Normalizing Flows

1 code implementation • ICML Workshop INNF 2021 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer

Normalizing flows, which learn a distribution by transforming the data to samples from a Gaussian base distribution, have proven powerful density approximations.
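The change-of-variables mechanics behind normalizing flows can be illustrated with a one-dimensional affine flow. This is a minimal sketch under my own naming, not the copula-based construction proposed in the paper:

```python
import math

def affine_flow_logpdf(x, shift, scale):
    """Log-density of x under a 1-D affine normalizing flow: the data point is
    mapped to the standard-Gaussian base via z = (x - shift) / scale, and the
    change-of-variables formula adds the log-determinant term -log(scale)."""
    z = (x - shift) / scale
    base_logpdf = -0.5 * z * z - 0.5 * math.log(2 * math.pi)  # standard normal
    return base_logpdf - math.log(scale)

# With shift=1 and scale=2, the flow recovers the N(1, 2^2) density at its mode.
print(round(affine_flow_logpdf(1.0, shift=1.0, scale=2.0), 4))  # -1.6121
```

A trainable flow stacks many such invertible maps with learned parameters; the Gaussian base is what the paper's copula-based variant replaces to capture heavy tails.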

Regularization and Reparameterization Avoid Vanishing Gradients in Sigmoid-Type Networks

no code implementations • 4 Jun 2021 • Leni Ven, Johannes Lederer

Deep learning requires several design choices, such as the nodes' activation functions and the widths, types, and arrangements of the layers.

Vocal Bursts Type Prediction
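The vanishing-gradient effect in sigmoid-type networks that this paper addresses can be demonstrated in a few lines; this sketch shows the generic phenomenon, not the paper's regularization or reparameterization remedies:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # The derivative sigma'(x) = sigma(x) * (1 - sigma(x)) is at most 1/4.
    s = sigmoid(x)
    return s * (1 - s)

# Backpropagation through a chain of sigmoid units multiplies one factor
# sigma'(x) <= 0.25 per layer, so gradients shrink geometrically with depth.
grad = 1.0
for _ in range(20):
    grad *= sigmoid_grad(0.0)  # 0.25, the maximum possible value
print(grad)  # 0.25**20 ≈ 9.1e-13
```

Even at the most favorable input (x = 0), twenty layers shrink the gradient by twelve orders of magnitude.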

Targeted Deep Learning: Framework, Methods, and Applications

no code implementations • 28 May 2021 • Shih-Ting Huang, Johannes Lederer

In this paper, we introduce a framework for targeted deep learning, and we devise and test an approach for adapting standard pipelines to the requirements of targeted deep learning.

DeepMoM: Robust Deep Learning With Median-of-Means

no code implementations • 28 May 2021 • Shih-Ting Huang, Johannes Lederer

In contrast, the arguably much more common case of corruption that reflects the limited quality of data has been studied much less.
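The median-of-means principle in the title is a classical robust estimator. Here is a generic sketch of that building block (my own naming), not the DeepMoM training procedure itself:

```python
import statistics

def median_of_means(values, n_blocks):
    """Median-of-means estimator: split the sample into blocks, average each
    block, and return the median of the block means. A few corrupted values
    can spoil at most a few block means, leaving the median intact."""
    k = len(values) // n_blocks
    means = [sum(values[i * k:(i + 1) * k]) / k for i in range(n_blocks)]
    return statistics.median(means)

# One gross outlier ruins the plain mean but not the median of means.
data = [1.0] * 14 + [1000.0]
print(sum(data) / len(data))              # 67.6
print(median_of_means(data, n_blocks=5))  # 1.0
```

DeepMoM applies this idea to neural-network loss aggregation, so that mislabeled or corrupted samples cannot dominate the training signal.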

Activation Functions in Artificial Neural Networks: A Systematic Overview

no code implementations • 25 Jan 2021 • Johannes Lederer

Activation functions shape the outputs of artificial neurons and, therefore, are integral parts of neural networks in general and deep learning in particular.
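Three of the activation functions surveyed in overviews like this one, with their characteristic output ranges; a minimal illustration, not code from the paper:

```python
import math

def relu(x):
    return max(0.0, x)  # outputs in [0, inf); exactly zero for negative inputs

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # outputs in (0, 1)

def tanh(x):
    return math.tanh(x)  # outputs in (-1, 1); zero-centered

for f in (relu, sigmoid, tanh):
    print(f.__name__, f(-2.0), f(0.0), f(2.0))
```

The choice matters: bounded activations such as sigmoid and tanh saturate for large inputs, whereas ReLU keeps a unit gradient on its active half-line.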

Connection- and Node-Sparse Deep Learning: Statistical Guarantees

no code implementations • 1 Jan 2021 • Johannes Lederer

Neural networks are becoming increasingly popular in applications, but a comprehensive mathematical understanding of their potentials and limitations is still missing.

Optimization Landscapes of Wide Deep Neural Networks Are Benign

no code implementations • 2 Oct 2020 • Johannes Lederer

We analyze the optimization landscapes of deep learning with wide networks.

No Spurious Local Minima: on the Optimization Landscapes of Wide and Deep Neural Networks

no code implementations • 28 Sep 2020 • Johannes Lederer

Empirical studies suggest that wide neural networks are comparably easy to optimize, but mathematical support for this observation is scarce.

Risk Bounds for Robust Deep Learning

no code implementations • 14 Sep 2020 • Johannes Lederer

It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data.

Is there a role for statistics in artificial intelligence?

no code implementations • 13 Sep 2020 • Sarah Friedrich, Gerd Antes, Sigrid Behr, Harald Binder, Werner Brannath, Florian Dumpert, Katja Ickstadt, Hans Kestler, Johannes Lederer, Heinz Leitgöb, Markus Pauly, Ansgar Steland, Adalbert Wilhelm, Tim Friede

The research on and application of artificial intelligence (AI) has triggered a comprehensive scientific, economic, social and political discussion.

Layer Sparsity in Neural Networks

no code implementations • 28 Jun 2020 • Mohamed Hebiri, Johannes Lederer

Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting.

BIG-bench Machine Learning

Statistical Guarantees for Regularized Neural Networks

no code implementations • 30 May 2020 • Mahsa Taheri, Fang Xie, Johannes Lederer

Neural networks have become standard tools in the analysis of data, but they lack comprehensive mathematical theories.

Thresholded Adaptive Validation: Tuning the Graphical Lasso for Graph Recovery

1 code implementation • 1 May 2020 • Mike Laszkiewicz, Asja Fischer, Johannes Lederer

Many Machine Learning algorithms are formulated as regularized optimization problems, but their performance hinges on a regularization parameter that needs to be calibrated to each application at hand.

Tuning parameter calibration for prediction in personalized medicine

1 code implementation • 23 Sep 2019 • Shih-Ting Huang, Yannick Düren, Kristoffer H. Hellton, Johannes Lederer

Personalized medicine has become an important part of medicine, for instance predicting individual drug responses based on genomic information.

regression

Aggregating Knockoffs for False Discovery Rate Control with an Application to Gut Microbiome Data

1 code implementation • 8 Jul 2019 • Fang Xie, Johannes Lederer

We support our method both in theory and simulations, and we show that it can lead to new discoveries on microbiome data from the American Gut Project.

Methodology · Quantitative Methods · Applications

False Discovery Rates in Biological Networks

1 code implementation • 8 Jul 2019 • Lu Yu, Tobias Kaufmann, Johannes Lederer

The increasing availability of data has generated unprecedented prospects for network analyses in many biological fields, such as neuroscience (e.g., brain networks), genomics (e.g., gene-gene interaction networks), and ecology (e.g., species interaction networks).

Methodology · Quantitative Methods · Applications

Maximum Regularized Likelihood Estimators: A General Prediction Theory and Applications

no code implementations • 9 Oct 2017 • Rui Zhuang, Johannes Lederer

Maximum regularized likelihood estimators (MRLEs) are arguably the most established class of estimators in high-dimensional statistics.

regression

Integrating Additional Knowledge Into Estimation of Graphical Models

1 code implementation • 10 Apr 2017 • Yunqi Bu, Johannes Lederer

In applications of graphical models, we typically have more information than just the samples themselves.

Tuning parameter calibration for $\ell_1$-regularized logistic regression

no code implementations • 1 Oct 2016 • Wei Li, Johannes Lederer

Feature selection is a standard approach to understanding and modeling high-dimensional classification data, but the corresponding statistical methods hinge on tuning parameters that are difficult to calibrate.

feature selection · General Classification +1

Oracle Inequalities for High-dimensional Prediction

no code implementations • 1 Aug 2016 • Johannes Lederer, Lu Yu, Irina Gaynanova

The abundance of high-dimensional data in the modern sciences has generated tremendous interest in penalized estimators such as the lasso, scaled lasso, square-root lasso, elastic net, and many others.

Vocal Bursts Intensity Prediction
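The penalized estimators listed in this abstract share one computational kernel: the soft-thresholding operator, the proximal map of the ℓ1 penalty. A minimal sketch (my own naming), not code accompanying the paper:

```python
def soft_threshold(x, lam):
    """Proximal operator of the l1 penalty: shrink x toward zero by lam and
    set it exactly to zero when |x| <= lam. This is the building block of
    coordinate-descent solvers for the lasso and its relatives."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Small coefficients are zeroed out, which is how the lasso selects features.
print([soft_threshold(v, 1.0) for v in (-3.0, -0.5, 0.0, 0.4, 2.5)])
# [-2.0, 0.0, 0.0, 0.0, 1.5]
```

The exact zeros produced here are what give these estimators their sparsity, and the oracle inequalities in the paper bound the prediction error of the resulting fits.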

Non-convex Global Minimization and False Discovery Rate Control for the TREX

1 code implementation • 22 Apr 2016 • Jacob Bien, Irina Gaynanova, Johannes Lederer, Christian Müller

The TREX is a recently introduced method for performing sparse high-dimensional regression.

Topology Adaptive Graph Estimation in High Dimensions

no code implementations • 27 Oct 2014 • Johannes Lederer, Christian Müller

We introduce Graphical TREX (GTREX), a novel method for graph estimation in high-dimensional Gaussian graphical models.

Vocal Bursts Intensity Prediction

On the Prediction Performance of the Lasso

no code implementations • 7 Feb 2014 • Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer

Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.

regression
