Search Results for author: Henning Petzka

Found 8 papers, 4 papers with code

FAM: Relative Flatness Aware Minimization

1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp

Their widespread adoption in practice, though, is dubious because of the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
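
To make the reparameterization curse concrete, here is a minimal NumPy sketch on a toy two-layer ReLU network of my own construction (not the paper's FAM code): rescaling one layer by a > 0 and the next by 1/a leaves the function unchanged, yet a naive perturbation-based sharpness proxy changes.

```python
# Toy illustration of the reparameterization curse (assumption: any small
# ReLU net works; this is not the FAM implementation).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(1, 4))

def net(x, W1, W2):
    # ReLU is positively homogeneous: relu(a*z) = a*relu(z) for a > 0,
    # so (a*W1, W2/a) computes exactly the same function as (W1, W2).
    return W2 @ np.maximum(W1 @ x, 0.0)

x, a = rng.normal(size=2), 10.0
assert np.allclose(net(x, W1, W2), net(x, a * W1, W2 / a))

def sharpness_proxy(W1, W2, eps=1e-3):
    # Naive "flatness": output change under a fixed-size weight perturbation.
    delta = net(x, W1 + eps, W2 + eps) - net(x, W1, W2)
    return float(np.abs(delta).max())

# Same function, different sharpness values -> the measure is not invariant.
print(sharpness_proxy(W1, W2), sharpness_proxy(a * W1, W2 / a))
```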

Discriminating Against Unrealistic Interpolations in Generative Adversarial Networks

1 code implementation • 2 Mar 2022 • Henning Petzka, Ted Kronvall, Cristian Sminchisescu

By reusing the discriminator network to modify the metric on the latent space, we propose a lightweight solution for improved interpolations in pre-trained GANs.
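
A hedged sketch of the underlying idea, with hypothetical `G` (generator) and `D` (discriminator) modules standing in for a pre-trained GAN. The paper's actual contribution is a modified latent-space metric; this only shows the basic move of reusing the discriminator to judge interpolant realism.

```python
import torch

def score_linear_path(G, D, z0, z1, steps=8):
    # Linear path between two latent codes; z0, z1 have shape (latent_dim,).
    ts = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    zs = (1.0 - ts) * z0 + ts * z1
    with torch.no_grad():
        # Low discriminator scores flag unrealistic interpolants.
        return D(G(zs)).squeeze()
```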

TropEx: An Algorithm for Extracting Linear Terms in Deep Neural Networks

no code implementations • ICLR 2021 • Martin Trimmel, Henning Petzka, Cristian Sminchisescu

Deep neural networks with rectified linear (ReLU) activations are piecewise linear functions, where hyperplanes partition the input space into an astronomically high number of linear regions.
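
This piecewise linearity is easy to verify directly. A minimal PyTorch sketch (my own toy network, not the TropEx algorithm) recovers the linear term A x + b that the network computes on the region containing a given input:

```python
import torch

# Toy ReLU network; within one linear region it computes exactly A @ x + b.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
x = torch.randn(3)
A = torch.autograd.functional.jacobian(net, x)  # shape (1, 3): local slope
b = net(x) - A @ x                              # local offset

# A tiny perturbation (almost surely) stays in the same linear region, so
# the extracted linear term still reproduces the network output exactly.
x2 = x + 1e-4 * torch.randn(3)
assert torch.allclose(net(x2), A @ x2 + b, atol=1e-5)
```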

Relative Flatness and Generalization

1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley

Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.

Generalization Bounds
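
As a rough illustration only: the paper's relative flatness measure is defined neuron-wise, but a simplified scalar variant multiplies a layer's squared weight norm by a Hutchinson estimate of the loss-Hessian trace for that layer, which counteracts the rescalings that plague plain Hessian-based measures. A hedged PyTorch sketch:

```python
import torch

def relative_flatness_proxy(loss, weight, n_probes=10):
    """Simplified, assumption-laden proxy: ||w||^2 * tr(H_w)."""
    grad, = torch.autograd.grad(loss, weight, create_graph=True)
    trace_est = 0.0
    for _ in range(n_probes):
        # Hutchinson estimator: E[v^T H v] = tr(H) for v ~ N(0, I).
        v = torch.randn_like(weight)
        hv, = torch.autograd.grad(grad, weight, grad_outputs=v,
                                  retain_graph=True)
        trace_est = trace_est + (v * hv).sum() / n_probes
    return weight.detach().pow(2).sum() * trace_est
```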

A Reparameterization-Invariant Flatness Measure for Deep Neural Networks

no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

The performance of deep neural networks is often attributed to their automated, task-related feature construction.

Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks

no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu

With this notion, the generalization error of a model trained on representative data can be bounded by its feature robustness, which in turn depends on our novel flatness measure.

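A hedged empirical sketch with a hypothetical model split into `features` and `head` (the paper's formal definition differs; this only conveys the idea of measuring loss sensitivity to perturbations of the learned representation):

```python
import torch

def feature_robustness_estimate(features, head, loss_fn, x, y,
                                eps=1e-2, trials=10):
    # Average loss increase when the representation is slightly perturbed.
    with torch.no_grad():
        phi = features(x)
        base = loss_fn(head(phi), y)
        deltas = [loss_fn(head(phi + eps * torch.randn_like(phi)), y) - base
                  for _ in range(trials)]
    return torch.stack(deltas).mean()
```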

Non-attracting Regions of Local Minima in Deep and Wide Neural Networks

no code implementations • 16 Dec 2018 • Henning Petzka, Cristian Sminchisescu

For extremely wide neural networks whose width decreases after the wide layer, we prove that every suboptimal local minimum belongs to such a non-attracting region: a connected set of equal-loss points from which a path of non-increasing loss leads away.

On the regularization of Wasserstein GANs

2 code implementations • ICLR 2018 • Henning Petzka, Asja Fischer, Denis Lukovnicov

Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data.
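
The regularizer proposed in this paper is commonly known as the Lipschitz penalty (WGAN-LP): unlike the two-sided WGAN-GP penalty, it penalizes only critic gradient norms above 1. A minimal PyTorch sketch with a hypothetical `critic` module (training loop omitted, sampling scheme simplified to WGAN-GP-style interpolates):

```python
import torch

def lipschitz_penalty(critic, real, fake, lam=10.0):
    # Random interpolates between real and fake samples (the paper also
    # discusses where the Lipschitz constraint should be enforced).
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                     device=real.device)
    x = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
    norms = grad.flatten(1).norm(2, dim=1)
    # One-sided: only gradient norms exceeding 1 are penalized.
    return lam * torch.clamp(norms - 1.0, min=0.0).pow(2).mean()
```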
