no code implementations • 9 Feb 2024 • Felix Draxler, Stefan Wahl, Christoph Schnörr, Ullrich Köthe
We present a novel theoretical framework for understanding the expressive power of coupling-based normalizing flows such as RealNVP.
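For intuition, here is a minimal PyTorch sketch of a RealNVP-style affine coupling layer, the building block whose expressive power the paper analyzes. This is an illustrative toy construction with arbitrary network sizes, not the authors' code.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Toy RealNVP-style coupling layer: the first half of the input passes
    through unchanged and parameterizes an invertible affine map of the
    second half, so the Jacobian determinant is cheap to compute."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                # bounded scales for stability
        y2 = x2 * torch.exp(s) + t       # invertible affine transform
        log_det = s.sum(dim=1)           # log |det J| of the layer
        return torch.cat([x1, y2], dim=1), log_det
```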
1 code implementation • 15 Dec 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Ullrich Köthe
Many real-world data, particularly in the natural sciences and computer vision, lie on known Riemannian manifolds such as spheres, tori, or the group of rotation matrices.
1 code implementation • 25 Oct 2023 • Felix Draxler, Peter Sorrenson, Lea Zimmermann, Armand Rousselot, Ullrich Köthe
Normalizing Flows are generative models that directly maximize the likelihood.
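Concretely, the training objective comes from the change-of-variables formula, log p_x(x) = log p_z(f(x)) + log |det J_f(x)|. A minimal sketch of the resulting negative log-likelihood with a standard-normal latent, assuming the flow returns the latent code and the per-sample log-Jacobian:

```python
import math
import torch

def flow_nll(z, log_det):
    # log p_x(x) = log N(f(x); 0, I) + log |det J_f(x)|
    d = z.shape[1]
    log_pz = -0.5 * (z.pow(2).sum(dim=1) + d * math.log(2 * math.pi))
    return -(log_pz + log_det).mean()
```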
1 code implementation • 23 Jun 2023 • Felix Draxler, Lars Kühmichel, Armand Rousselot, Jens Müller, Christoph Schnörr, Ullrich Köthe
Gaussianization is a simple generative model that can be trained without backpropagation.
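A rough sketch of the classical Gaussianization recipe this line of work builds on (not the paper's trainable variant): alternate per-dimension marginal Gaussianization with rotations, with no gradients involved. The layer count and random rotations here are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x, n_layers=10, seed=0):
    """Alternate marginal Gaussianization (empirical CDF followed by the
    inverse normal CDF) with random orthogonal rotations that mix the
    dimensions. No backpropagation is required."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    for _ in range(n_layers):
        ranks = x.argsort(axis=0).argsort(axis=0)
        x = norm.ppf((ranks + 0.5) / n)       # per-dimension Gaussianization
        q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        x = x @ q                             # random rotation mixes dims
    return x
```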
2 code implementations • 2 Jun 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Lea Zimmermann, Ullrich Köthe
Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data.
1 code implementation • 17 Mar 2023 • Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe
We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution.
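As a point of reference for what "learning to reject" means operationally, here is a common selective-prediction baseline (an assumed setup for illustration, not the paper's method): abstain whenever the classifier's confidence falls below a threshold, so failures become explicit rejections instead of silent errors.

```python
import numpy as np

def predict_or_reject(probs, threshold=0.9):
    """Return the argmax class where the maximum softmax probability
    exceeds the threshold, and -1 (reject) everywhere else."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)
```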
2 code implementations • 25 Oct 2022 • Felix Draxler, Christoph Schnörr, Ullrich Köthe
For the first time, we make a quantitative statement about this kind of convergence: we prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e., diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow.
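Whitening itself is a simple linear operation; for concreteness, a NumPy sketch of the PCA variant (the target of the convergence statement, not the proof):

```python
import numpy as np

def whiten(x, eps=1e-6):
    """Center the data and rescale along the eigenbasis of the covariance
    so that the whitened covariance is (approximately) the identity."""
    x = x - x.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    return (x @ eigvec) / np.sqrt(eigval + eps)
```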
2 code implementations • ICLR 2019 • Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, Aaron Courville
Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy.
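The spectral-bias phenomenon the paper studies can be reproduced in a few lines: fit a signal with a low- and a high-frequency component and track the network's Fourier amplitudes during training; the low frequency is matched much earlier. This is a toy demo with arbitrary hyperparameters, not the paper's experiments.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.sin(2 * math.pi * 2 * x) + torch.sin(2 * math.pi * 20 * x)

net = nn.Sequential(nn.Linear(1, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2001):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        # amplitudes of the k=2 and k=20 Fourier bins of the current fit
        spec = torch.fft.rfft(net(x).detach().squeeze()).abs()
        print(step, round(float(spec[2]), 2), round(float(spec[20]), 2))
```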
2 code implementations • ICML 2018 • Felix Draxler, Kambis Veschgini, Manfred Salmhofer, Fred A. Hamprecht
Training neural networks involves finding minima of a high-dimensional non-convex loss function.
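A standard probe of such landscapes is to evaluate the loss along the straight line between two independently trained minima (the naive baseline; the paper instead constructs curved paths along which the loss stays essentially flat). A sketch, assuming two models of identical architecture:

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def loss_along_line(model_a, model_b, loss_fn, data, n=11):
    """Interpolate linearly between the parameter vectors of two trained
    models and record the loss at each point along the line."""
    pa = parameters_to_vector(model_a.parameters()).detach()
    pb = parameters_to_vector(model_b.parameters()).detach()
    losses = []
    for t in torch.linspace(0, 1, n):
        vector_to_parameters((1 - t) * pa + t * pb, model_a.parameters())
        with torch.no_grad():
            inputs, targets = data
            losses.append(loss_fn(model_a(inputs), targets).item())
    vector_to_parameters(pa, model_a.parameters())  # restore model_a
    return losses
```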