Search Results for author: Felix Draxler

Found 9 papers, 8 papers with code

On the Universality of Coupling-based Normalizing Flows

no code implementations • 9 Feb 2024 • Felix Draxler, Stefan Wahl, Christoph Schnörr, Ullrich Köthe

We present a novel theoretical framework for understanding the expressive power of coupling-based normalizing flows such as RealNVP.
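
For context, the coupling layers analyzed here split the input in two and transform one half conditioned on the other, which keeps the Jacobian triangular and its log-determinant cheap. A minimal NumPy sketch of a RealNVP-style affine coupling layer, with the conditioner `s_t` as a hypothetical stand-in for a neural network:

```python
import numpy as np

def affine_coupling(x, s_t):
    """One RealNVP-style affine coupling layer.

    Splits x into halves (x1, x2); x1 passes through unchanged,
    x2 is scaled and shifted by functions of x1. The Jacobian is
    triangular, so log|det J| is just the sum of the log-scales.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = s_t(x1)                     # conditioner: any function of x1
    y2 = x2 * np.exp(s) + t            # elementwise affine transform
    log_det = s.sum(axis=-1)           # log-determinant of the Jacobian
    return np.concatenate([x1, y2], axis=-1), log_det

# Toy conditioner: a fixed linear map (stands in for a neural network).
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
s_t = lambda x1: (np.tanh(x1 @ W_s), x1 @ W_t)

y, log_det = affine_coupling(rng.normal(size=(8, 4)), s_t)
```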

Learning Distributions on Manifolds with Free-form Flows

1 code implementation • 15 Dec 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Ullrich Köthe

Many real-world data, particularly in the natural sciences and computer vision, lie on known Riemannian manifolds such as spheres, tori, or the group of rotation matrices.
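
As a small illustration of one such manifold, the rotation group SO(3), here is the standard QR-based construction of a random rotation matrix in NumPy (a generic recipe, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random rotation via QR decomposition of a Gaussian matrix:
# Q is orthogonal; fixing the signs of R's diagonal makes the
# factorization unique, and the determinant correction lands in SO(3).
Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.diag(R))            # fix the sign ambiguity per column
if np.linalg.det(Q) < 0:            # reflect into the rotation group
    Q[:, 0] *= -1

assert np.allclose(Q @ Q.T, np.eye(3))    # orthogonality
assert np.isclose(np.linalg.det(Q), 1.0)  # determinant +1: a rotation
```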

Lifting Architectural Constraints of Injective Flows

2 code implementations • 2 Jun 2023 • Peter Sorrenson, Felix Draxler, Armand Rousselot, Sander Hummerich, Lea Zimmermann, Ullrich Köthe

Normalizing Flows explicitly maximize a full-dimensional likelihood on the training data.
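
For reference, the full-dimensional likelihood is the standard change-of-variables objective (standard notation, not quoted from the paper): for an invertible flow $f$ mapping data $x$ to a latent with base density $p_Z$,

```latex
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log\left|\det \frac{\partial f}{\partial x}(x)\right|
```

Injective flows instead map into a lower-dimensional latent space, where the Jacobian is rectangular and the determinant term is replaced by $\tfrac{1}{2}\log\det(J^\top J)$; keeping that quantity tractable is, roughly, the source of the architectural constraints the title refers to.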

Finding Competence Regions in Domain Generalization

1 code implementation • 17 Mar 2023 • Jens Müller, Stefan T. Radev, Robert Schmier, Felix Draxler, Carsten Rother, Ullrich Köthe

We investigate a "learning to reject" framework to address the problem of silent failures in Domain Generalization (DG), where the test distribution differs from the training distribution.
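
In outline, "learning to reject" augments a classifier with a competence score and abstains below a threshold; a generic selective-prediction sketch, with maximum softmax probability as a hypothetical stand-in for the incompetence estimators the paper compares:

```python
import numpy as np

def predict_or_reject(probs, threshold=0.5):
    """Selective prediction: abstain when the competence score is low.

    probs: (n, k) predicted class probabilities.
    Uses max softmax probability as a proxy competence score;
    returns the predicted class, or -1 to signal rejection.
    """
    score = probs.max(axis=1)          # proxy for competence
    preds = probs.argmax(axis=1)
    return np.where(score >= threshold, preds, -1)

probs = np.array([[0.9, 0.1], [0.55, 0.45]])
print(predict_or_reject(probs, threshold=0.7))  # [0, -1]: second input rejected
```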

Whitening Convergence Rate of Coupling-based Normalizing Flows

2 code implementations • 25 Oct 2022 • Felix Draxler, Christoph Schnörr, Ullrich Köthe

For the first time, we make a quantitative statement about this kind of convergence: We prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e., diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow.
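
Here "whitening" means transforming the data so its covariance becomes the identity; a NumPy illustration of that end state (the target of the convergence result, not the proof itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10000, 3)) @ rng.normal(size=(3, 3))  # correlated data

# Whitening: rotate into the eigenbasis of the covariance and rescale.
cov = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
z = (x - x.mean(axis=0)) @ eigvec / np.sqrt(eigval)

print(np.round(np.cov(z, rowvar=False), 2))  # approximately the identity
```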

On the Spectral Bias of Neural Networks

2 code implementations • ICLR 2019 • Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, Aaron Courville

Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with $100\%$ accuracy.
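
The titular spectral bias is easy to observe: fit a small MLP to a two-frequency target and track the Fourier amplitudes of its output during training. A PyTorch sketch (an illustrative setup, not the paper's experiments); the low-frequency component typically converges much earlier:

```python
import numpy as np
import torch
import torch.nn as nn

# Target: superposition of a low (k=2) and a high (k=30) frequency.
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.sin(2 * np.pi * 2 * x) + torch.sin(2 * np.pi * 30 * x)

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2001):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        # Amplitude of each target frequency in the network's output:
        spec = np.abs(np.fft.rfft(net(x).detach().numpy().ravel()))
        print(f"step {step}: |k=2| = {spec[2]:.1f}, |k=30| = {spec[30]:.1f}")
```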

Essentially No Barriers in Neural Network Energy Landscape

2 code implementations • ICML 2018 • Felix Draxler, Kambis Veschgini, Manfred Salmhofer, Fred A. Hamprecht

Training neural networks involves finding minima of a high-dimensional non-convex loss function.
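
The setting can be probed with a simple baseline experiment: train two networks from different seeds and scan the loss along the straight line between their parameters, where a barrier typically appears (the paper constructs curved paths along which it essentially vanishes). A PyTorch sketch with a toy task (a hypothetical setup, not the paper's experiments):

```python
import copy
import torch
import torch.nn as nn

x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = torch.sin(3 * x)

def train(seed):
    """Train a small MLP to a (local) minimum from a given init."""
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(1000):
        loss = ((net(x) - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

net_a, net_b = train(1), train(2)

# Loss along the straight line theta(t) = (1 - t) * theta_a + t * theta_b.
probe = copy.deepcopy(net_a)
with torch.no_grad():
    for t in [i / 10 for i in range(11)]:
        for p, pa, pb in zip(probe.parameters(),
                             net_a.parameters(), net_b.parameters()):
            p.copy_((1 - t) * pa + t * pb)
        loss = ((probe(x) - y) ** 2).mean().item()
        print(f"t={t:.1f}  loss={loss:.4f}")
```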
