Search Results for author: Dominik Stöger

Found 9 papers, 0 papers with code

Upper and lower bounds for the Lipschitz constant of random neural networks

no code implementations • 2 Nov 2023 • Paul Geuchen, Thomas Heindl, Dominik Stöger, Felix Voigtlaender

Empirical studies have widely demonstrated that neural networks are highly sensitive to small, adversarial perturbations of the input.
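
As an illustration of the quantities the title refers to, here is a minimal numpy sketch of the two standard bounds for a random ReLU network: the product of the layers' spectral norms upper-bounds the Lipschitz constant, and the Jacobian norm at any input lower-bounds it. The architecture, Gaussian initialization scale, and input sampling below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: standard upper and lower bounds on the Lipschitz constant of a
# random ReLU network. Architecture and Gaussian init scale are assumptions.
import numpy as np

rng = np.random.default_rng(0)
widths = [50, 100, 100, 1]                        # input dim, two hidden layers, scalar output
Ws = [rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))
      for fan_in, fan_out in zip(widths[:-1], widths[1:])]

# Upper bound: ReLU is 1-Lipschitz, so Lip(f) <= product of spectral norms ||W_i||_2.
upper = np.prod([np.linalg.norm(W, 2) for W in Ws])

def jacobian_norm(x):
    """Spectral norm of the Jacobian at x; a valid lower bound on Lip(f)."""
    h, J = Ws[0] @ x, Ws[0]
    for W in Ws[1:]:
        D = (h > 0).astype(float)                 # ReLU derivative at the pre-activations
        J = W @ (D[:, None] * J)
        h = W @ np.maximum(h, 0.0)
    return np.linalg.norm(J, 2)

# Lower bound: take the best Jacobian norm over a few random inputs.
lower = max(jacobian_norm(rng.normal(size=widths[0])) for _ in range(100))
print(f"{lower:.2f} <= Lip(f) <= {upper:.2f}")
```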

Implicit Balancing and Regularization: Generalization and Convergence Guarantees for Overparameterized Asymmetric Matrix Sensing

no code implementations • 24 Mar 2023 • Mahdi Soltanolkotabi, Dominik Stöger, Changzhi Xie

We show that in this setting, factorized gradient descent enjoys two implicit properties: (1) a coupling property, where the two factors remain coupled in various ways throughout the gradient update trajectory, and (2) an algorithmic regularization property, where the iterates show a propensity towards low-rank models despite the overparameterized nature of the factorized model.
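
A minimal numpy sketch of the setting these properties refer to: factorized gradient descent for overparameterized asymmetric matrix sensing from a small random initialization. All problem sizes, the initialization scale `alpha`, and the step size `eta` are assumptions chosen for illustration, not values from the paper.

```python
# Minimal sketch: factorized gradient descent for overparameterized asymmetric
# matrix sensing. Problem sizes, init scale alpha, and step size eta are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, k, m = 20, 15, 2, 6, 600               # true rank r, overparameterized rank k > r
Xstar = rng.normal(size=(n1, r)) @ rng.normal(size=(r, n2))
A = rng.normal(size=(m, n1, n2)) / np.sqrt(m)     # Gaussian measurement matrices
y = np.einsum('mij,ij->m', A, Xstar)

alpha, eta, T = 1e-3, 0.01, 2000                  # small random initialization scale
U, V = alpha * rng.normal(size=(n1, k)), alpha * rng.normal(size=(n2, k))
for _ in range(T):
    res = np.einsum('mij,ij->m', A, U @ V.T) - y  # residuals of the current iterate
    G = np.einsum('m,mij->ij', res, A)            # gradient of the loss w.r.t. U V^T
    U, V = U - eta * G @ V, V - eta * G.T @ U     # simultaneous factor updates
print("relative error:", np.linalg.norm(U @ V.T - Xstar) / np.linalg.norm(Xstar))
print("imbalance ||U^T U - V^T V||:", np.linalg.norm(U.T @ U - V.T @ V))
```

The last line illustrates the balancing property: the imbalance $U^\top U - V^\top V$ is preserved to first order by the gradient updates, so it remains small relative to the scale of the factors themselves.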

How robust is randomized blind deconvolution via nuclear norm minimization against adversarial noise?

no code implementations • 17 Mar 2023 • Julia Kostin, Felix Krahmer, Dominik Stöger

Reformulation of blind deconvolution as a low-rank recovery problem has led to multiple theoretical recovery guarantees in the past decade due to the success of the nuclear norm minimization heuristic.
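
A minimal sketch of the lifting idea behind this line of work, using cvxpy: the unknown pair $(h, x)$ enters the measurements only through the rank-one matrix $hx^\top$, so one can try to recover it by nuclear norm minimization over matrices. The Gaussian measurement operator below is a stand-in assumption, not the randomized blind deconvolution model analyzed in the paper.

```python
# Minimal cvxpy sketch: recover the rank-one lifted matrix h x^T from random linear
# measurements by nuclear norm minimization. Gaussian measurements are an assumption;
# the paper analyzes a randomized blind deconvolution model.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K, N, m = 8, 8, 150                               # factor dimensions and sample size
h, x = rng.normal(size=K), rng.normal(size=N)
A = rng.normal(size=(m, K, N)) / np.sqrt(m)       # random measurement matrices
y = np.einsum('mij,ij->m', A, np.outer(h, x))     # noiseless bilinear measurements

X = cp.Variable((K, N))
constraints = [cp.sum(cp.multiply(A[i], X)) == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
rel_err = np.linalg.norm(X.value - np.outer(h, x)) / np.linalg.norm(np.outer(h, x))
print("relative recovery error:", rel_err)
```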

Randomly Initialized Alternating Least Squares: Fast Convergence for Matrix Sensing

no code implementations • 25 Apr 2022 • Kiryung Lee, Dominik Stöger

In this paper, we show that ALS with random initialization converges to the true solution with $\varepsilon$-accuracy in $O(\log n + \log(1/\varepsilon))$ iterations using only a near-optimal amount of samples, where we assume the measurement matrices to be i.i.d.
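
A minimal numpy sketch of randomly initialized ALS for matrix sensing (an illustration, not the paper's exact algorithm or sampling regime): because $y_i = \langle A_i, UV^\top \rangle$ is linear in $U$ for fixed $V$ and vice versa, each half-iteration reduces to an ordinary least-squares solve.

```python
# Minimal sketch: randomly initialized alternating least squares for matrix sensing.
# Sizes and sample count are assumptions; y_i = <A_i, U V^T> is linear in each factor,
# so each half-iteration is an ordinary least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 15, 2, 400
Xstar = rng.normal(size=(n1, r)) @ rng.normal(size=(r, n2))
A = rng.normal(size=(m, n1, n2)) / np.sqrt(m)     # i.i.d. Gaussian measurement matrices
y = np.einsum('mij,ij->m', A, Xstar)

V = rng.normal(size=(n2, r))                      # random initialization, no spectral step
for _ in range(15):
    # Solve for U with V fixed: row i of the design matrix is vec(A_i V).
    MU = np.einsum('mij,jr->mir', A, V).reshape(m, n1 * r)
    U = np.linalg.lstsq(MU, y, rcond=None)[0].reshape(n1, r)
    # Solve for V with U fixed: row i of the design matrix is vec(A_i^T U).
    MV = np.einsum('mji,jr->mir', A, U).reshape(m, n2 * r)
    V = np.linalg.lstsq(MV, y, rcond=None)[0].reshape(n2, r)
print("relative error:", np.linalg.norm(U @ V.T - Xstar) / np.linalg.norm(Xstar))
```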

Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction

no code implementations • NeurIPS 2021 • Dominik Stöger, Mahdi Soltanolkotabi

Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on nonconvex losses with overparameterized models.
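
In the low-rank matrix reconstruction setting the title refers to, the paper's thesis can be illustrated numerically: while the gradient descent iterates from a small random initialization are still tiny, each step is approximately the power iteration $U \leftarrow (I + 2\eta M)U$ on the spectral matrix $M = \sum_i y_i A_i$. The sketch below (symmetric PSD sensing, with all scales chosen as assumptions) compares the two updates directly.

```python
# Minimal sketch of the "small init ~ spectral learning" phenomenon for symmetric
# matrix sensing: while U is tiny, gradient descent matches the power iteration
# U <- (I + 2*eta*M) U on the spectral matrix M = sum_i y_i A_i. All scales are
# assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 30, 2, 500
Z = rng.normal(size=(n, r))
Xstar = Z @ Z.T                                   # rank-r PSD ground truth
A = rng.normal(size=(m, n, n)) / np.sqrt(m)
A = (A + A.transpose(0, 2, 1)) / 2                # symmetrized Gaussian measurements
y = np.einsum('mij,ij->m', A, Xstar)
M = np.einsum('m,mij->ij', y, A)                  # the spectral matrix, E[M] = Xstar

eta, T = 1e-3, 100
U0 = 1e-6 * rng.normal(size=(n, r))               # small random initialization
U_gd, U_pi = U0.copy(), U0.copy()
for _ in range(T):
    res = np.einsum('mij,ij->m', A, U_gd @ U_gd.T) - y
    U_gd = U_gd - 2 * eta * np.einsum('m,mij->ij', res, A) @ U_gd   # gradient step
    U_pi = U_pi + 2 * eta * M @ U_pi                                # power iteration
print("GD vs power-iteration deviation:",
      np.linalg.norm(U_gd - U_pi) / np.linalg.norm(U_pi))
```

During this phase the column space of $U$ aligns with the top eigenvectors of $M$, i.e. with the usual spectral initialization, before the nonlinear fitting phase begins.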

Understanding Overparameterization in Generative Adversarial Networks

no code implementations • 12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.

Understanding Over-parameterization in Generative Adversarial Networks

no code implementations • ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.

On the convex geometry of blind deconvolution and matrix completion

no code implementations • 28 Feb 2019 • Felix Krahmer, Dominik Stöger

We find that for both of these applications, the dimension factors in the noise bounds are not an artifact of the proof; rather, the problems are intrinsically badly conditioned.

