1 code implementation • 5 Jul 2023 • Linara Adilova, Amr Abourayya, Jianning Li, Amin Dada, Henning Petzka, Jan Egger, Jens Kleesiek, Michael Kamp
The widespread adoption of flatness measures in practice, though, is dubious because of the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
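The reparameterization curse can be seen in a minimal sketch (not the paper's method; network sizes, the scaling factor, and the perturbation-based sharpness proxy are illustrative assumptions): positively rescaling one layer of a ReLU network up and the next layer down leaves the function unchanged, yet a naive sharpness measure changes drastically.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # first layer weights (illustrative sizes)
W2 = rng.normal(size=(1, 8))   # second layer weights
X = rng.normal(size=(4, 32))   # a batch of 32 random inputs

def forward(W1, W2, X):
    """Two-layer ReLU network without biases."""
    return W2 @ np.maximum(W1 @ X, 0.0)

def sharpness(W1, W2, X, eps=1e-2, trials=100):
    """Naive sharpness proxy: mean squared output change under random
    weight perturbations of fixed norm eps. Not reparameterization-invariant."""
    base = forward(W1, W2, X)
    total = 0.0
    for _ in range(trials):
        d1 = rng.normal(size=W1.shape); d1 *= eps / np.linalg.norm(d1)
        d2 = rng.normal(size=W2.shape); d2 *= eps / np.linalg.norm(d2)
        total += np.mean((forward(W1 + d1, W2 + d2, X) - base) ** 2)
    return total / trials

a = 10.0  # positive rescaling factor (assumed for illustration)
# ReLU is positively homogeneous, so the rescaled network computes the
# same function ...
assert np.allclose(forward(W1, W2, X), forward(a * W1, W2 / a, X))
# ... but the naive sharpness measure does not agree across the two
# parameterizations, even though generalization is identical.
print(sharpness(W1, W2, X), sharpness(a * W1, W2 / a, X))
```

The second sharpness value comes out far larger than the first, even though both parameter vectors represent exactly the same function; this is the mismatch the reparameterization curse refers to.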
1 code implementation • 2 Mar 2022 • Henning Petzka, Ted Kronvall, Cristian Sminchisescu
By reusing the discriminator network to modify the metric on the latent space, we propose a lightweight solution for improved interpolations in pre-trained GANs.
no code implementations • ICLR 2021 • Martin Trimmel, Henning Petzka, Cristian Sminchisescu
Deep neural networks with rectified linear (ReLU) activations are piecewise linear functions, where hyperplanes partition the input space into an astronomically large number of linear regions.
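The partition into linear regions can be made concrete with a small sketch (layer sizes, the sampling domain, and the counting-by-activation-pattern approach are illustrative assumptions, not the paper's method): each hidden neuron defines a hyperplane, and inputs that activate the same subset of neurons lie in the same linear region.

```python
import numpy as np

rng = np.random.default_rng(0)
# One hidden ReLU layer: 2 inputs -> 16 hidden units (illustrative sizes).
W = rng.normal(size=(16, 2))
b = rng.normal(size=16)

def activation_pattern(x):
    """Records, for each neuron i, which side of its hyperplane
    w_i . x + b_i = 0 the input lies on. Inputs sharing a pattern
    lie in the same linear region of the network."""
    return tuple((W @ x + b > 0).astype(int))

# Sample the square [-3, 3]^2 and count the distinct patterns hit.
xs = rng.uniform(-3, 3, size=(20000, 2))
patterns = {activation_pattern(x) for x in xs}
print(f"distinct linear regions sampled: {len(patterns)}")
```

For 16 hyperplanes in 2D the number of regions is bounded by 1 + 16 + C(16, 2) = 137, so sampling recovers at most that many patterns; for deep networks with many neurons the count grows astronomically, which is the regime the excerpt refers to.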
1 code implementation • NeurIPS 2021 • Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks.
no code implementations • 29 Nov 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
The performance of deep neural networks is often attributed to their automated, task-related feature construction.
no code implementations • 25 Sep 2019 • Henning Petzka, Linara Adilova, Michael Kamp, Cristian Sminchisescu
With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure.
no code implementations • 16 Dec 2018 • Henning Petzka, Cristian Sminchisescu
For extremely wide neural networks of decreasing width after the wide layer, we prove that every suboptimal local minimum belongs to such a connected set.
2 code implementations • ICLR 2018 • Henning Petzka, Asja Fischer, Denis Lukovnicov
Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data.