Search Results for author: Felix Voigtlaender

Found 14 papers, 1 paper with code

Upper and lower bounds for the Lipschitz constant of random neural networks

no code implementations • 2 Nov 2023 • Paul Geuchen, Thomas Heindl, Dominik Stöger, Felix Voigtlaender

Empirical studies have widely demonstrated that neural networks are highly sensitive to small, adversarial perturbations of the input.
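
As a rough illustration of the two bounds in question (not the paper's construction), the following numpy sketch brackets the Lipschitz constant of a random one-hidden-layer ReLU network: the product of the layers' spectral norms bounds it from above, and the largest gradient norm found by sampling bounds it from below. All sizes and scalings are illustrative assumptions.

    # Illustrative sketch: bracket Lip(f) for f(x) = W2 @ relu(W1 @ x).
    # Dimensions and variance scalings are assumptions, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 50, 1000                                  # input dim, hidden width
    W1 = rng.normal(0, 1 / np.sqrt(d), (n, d))
    W2 = rng.normal(0, 1 / np.sqrt(n), (1, n))

    upper = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)   # ||W2|| * ||W1||

    # Lower bound: max over sampled x of ||grad f(x)||, using the fact that
    # grad f(x) = W2 diag(1{W1 x > 0}) W1 wherever f is differentiable.
    lower = 0.0
    for _ in range(200):
        x = rng.normal(size=d)
        act = (W1 @ x > 0).astype(float)             # ReLU activation pattern
        grad = (W2 * act) @ W1                       # gradient of f at x
        lower = max(lower, np.linalg.norm(grad))

    print(f"lower bound {lower:.3f} <= Lip(f) <= upper bound {upper:.3f}")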

$L^p$ sampling numbers for the Fourier-analytic Barron space

no code implementations • 16 Aug 2022 • Felix Voigtlaender

In this paper, we consider Barron functions $f : [0, 1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as \[ f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty. \]
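
A minimal one-dimensional sketch of this definition, assuming an illustrative Gaussian Fourier weight $F$ (not taken from the paper): it evaluates both the integral defining $f$ and the $\sigma$-weighted norm by simple Riemann sums.

    # Sketch: a Gaussian weight F(xi) = exp(-xi^2) has finite sigma-weighted
    # norm for every sigma > 0; sigma = 2 and the grid are arbitrary choices.
    import numpy as np

    sigma = 2.0
    xi = np.linspace(-40, 40, 200001)
    dx = xi[1] - xi[0]
    F = np.exp(-xi**2)

    barron_norm = np.sum(np.abs(F) * (1 + np.abs(xi))**sigma) * dx

    def f(x):
        # f(x) = integral of F(xi) exp(2 pi i x xi) d xi; real here since F is even
        return np.sum(F * np.cos(2 * np.pi * x * xi)) * dx

    print(f"sigma-weighted norm ~ {barron_norm:.4f}, f(0.5) ~ {f(0.5):.6f}")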

Learning ReLU networks to high uniform accuracy is intractable

1 code implementation • 26 May 2022 • Julius Berner, Philipp Grohs, Felix Voigtlaender

Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class.

Learning Theory
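
A toy numpy illustration of the standard lower-bound mechanism behind such intractability results (an assumption about the proof idea, not code from the linked implementation): a narrow ReLU bump hidden between the sample points is invisible to any algorithm that only sees the samples, so the uniform error cannot go below the bump height.

    # Sketch: a 3-neuron ReLU "hat" supported inside one sampling gap.
    # m, the bump width, and its placement are illustrative choices.
    import numpy as np

    m = 100
    samples = np.linspace(0, 1, m, endpoint=False)       # training inputs

    def bump(x, center, width):
        # hat of height 1, realizable as relu combinations
        r = lambda t: np.maximum(t, 0.0)
        return (r(x - center + width) - 2 * r(x - center)
                + r(x - center - width)) / width

    center = samples[0] + 0.5 / m                        # midpoint of a gap
    f = lambda x: bump(x, center, 0.4 / m)

    print("max |f| on training samples:", np.abs(f(samples)).max())  # ~ 0
    grid = np.linspace(0, 1, 100001)
    print("sup-norm of f on [0,1]:     ", np.abs(f(grid)).max())     # ~ 1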

Optimal learning of high-dimensional classification problems using deep neural networks

no code implementations • 23 Dec 2021 • Philipp Petersen, Felix Voigtlaender

We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity.

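For illustration of the setting only (the paper studies deep ReLU networks, not nearest-neighbor rules), here is a numpy sketch that generates noiseless training samples whose labels are determined by a smooth decision boundary and measures the test error of a simple 1-nearest-neighbor plug-in classifier. The boundary and sample sizes are illustrative assumptions.

    # Sketch: labels y = sign(x2 - g(x1)) for a smooth boundary g, no noise.
    import numpy as np

    rng = np.random.default_rng(1)
    g = lambda t: 0.3 * np.sin(2 * np.pi * t)            # smooth boundary

    def sample(n):
        X = rng.uniform(-1, 1, (n, 2))
        y = np.sign(X[:, 1] - g(X[:, 0]))                # noiseless labels
        return X, y

    Xtr, ytr = sample(1000)
    Xte, yte = sample(2000)

    # 1-NN prediction: copy the label of the closest training point
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    pred = ytr[d2.argmin(axis=1)]
    print("test misclassification rate:", (pred != yte).mean())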

Sobolev-type embeddings for neural network approximation spaces

no code implementations • 28 Oct 2021 • Philipp Grohs, Felix Voigtlaender

We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers.

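A hedged sketch of the quantity these spaces classify by: the rate at which piecewise-linear interpolants with $n$ knots (each realizable as a ReLU network with $O(n)$ coefficients) approximate a target, here $f(x) = \sqrt{x}$ with uniform knots as an illustrative choice; the fitted exponent comes out near $-1/2$.

    # Sketch: log-log fit of the sup-norm approximation error against n.
    import numpy as np

    f = lambda x: np.sqrt(x)
    grid = np.linspace(0, 1, 200001)

    errs, ns = [], [8, 16, 32, 64, 128, 256]
    for n in ns:
        knots = np.linspace(0, 1, n + 1)
        approx = np.interp(grid, knots, f(knots))   # piecewise-linear interpolant
        errs.append(np.abs(f(grid) - approx).max())

    rate = np.polyfit(np.log(ns), np.log(errs), 1)[0]
    print("fitted decay rate: n^%.2f" % rate)       # ~ n^-0.5 for uniform knots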

Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity Bounds for Neural Network Approximation Spaces

no code implementations • 6 Apr 2021 • Philipp Grohs, Felix Voigtlaender

Algorithms based on point samples (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning.
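
A minimal example of the kind of point-sample algorithm the result covers: plain SGD on a tiny ReLU network, fit to samples $(x_i, f(x_i))$. Every architectural and hyperparameter choice below is an arbitrary assumption rather than anything from the paper.

    # Sketch: SGD from point samples for a width-16 one-hidden-layer ReLU net.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.abs(x - 0.3)                 # target, seen only via samples
    X = rng.uniform(0, 1, 512); Y = f(X)

    w = rng.normal(size=16); b = rng.normal(size=16); v = rng.normal(size=16) * 0.1
    lr = 0.05
    for step in range(20000):
        i = rng.integers(512)                     # one random sample per step
        h = np.maximum(w * X[i] + b, 0.0)         # hidden layer
        err = v @ h - Y[i]
        gv = err * h                              # gradient w.r.t. v
        gh = err * v * (h > 0)                    # backprop through ReLU
        w -= lr * gh * X[i]; b -= lr * gh; v -= lr * gv

    xs = np.linspace(0, 1, 1001)
    H = np.maximum(np.outer(xs, w) + b, 0.0)
    print("max error on grid:", np.abs(H @ v - f(xs)).max())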

The universal approximation theorem for complex-valued neural networks

no code implementations • 6 Dec 2020 • Felix Voigtlaender

We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks.
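
A small numpy sketch of the objects involved: a one-hidden-layer complex-valued network using the modReLU activation, which is understood to satisfy the paper's admissibility criterion (it is not polyharmonic). Width, weights, and the input are illustrative assumptions.

    # Sketch: forward pass of a complex-valued net with modReLU.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 64, 2
    W1 = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
    b1 = rng.normal(size=n)                       # modReLU uses a real bias
    W2 = rng.normal(size=(1, n)) + 1j * rng.normal(size=(1, n))

    def modrelu(z, b):
        # modReLU(z) = relu(|z| + b) * z / |z|, and 0 where |z| + b <= 0
        r = np.abs(z)
        return np.where(r + b > 0, (r + b) * z / np.maximum(r, 1e-12), 0.0)

    def net(z):                                   # z: complex input in C^d
        return (W2 @ modrelu(W1 @ z, b1))[0]

    print(net(np.array([0.3 + 0.1j, -0.2j])))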

Phase Transitions in Rate Distortion Theory and Deep Learning

no code implementations • 3 Aug 2020 • Philipp Grohs, Andreas Klotz, Felix Voigtlaender

We also provide quantitative and non-asymptotic bounds on the probability that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits.
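
An elementary sketch of the "$R$ bits to accuracy $\varepsilon$" bookkeeping, using uniform scalar quantization rather than the paper's encoder; the signal model and bit budget are illustrative.

    # Sketch: encode a random vector in [0,1]^n with R = n * B bits and check
    # the sup-norm accuracy achieved; each extra bit per sample halves the error.
    import numpy as np

    rng = np.random.default_rng(0)
    n, B = 256, 6                                  # samples, bits per sample
    f = rng.uniform(0, 1, n)                       # the random object to encode

    levels = 2 ** B
    code = np.round(f * (levels - 1)).astype(int)  # the R = n*B bit description
    decoded = code / (levels - 1)

    print(f"R = {n * B} bits, eps = {np.abs(f - decoded).max():.2e}")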

Approximation in $L^p(\mu)$ with deep ReLU neural networks

no code implementations • 9 Apr 2019 • Felix Voigtlaender, Philipp Petersen

In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.

Learning Theory
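
A short sketch of the error notion in question: a Monte Carlo estimate of $\|f - g\|_{L^2(\mathbb{P})}$ under a non-uniform data distribution. The Gaussian $\mathbb{P}$, the target $f$, and the crude ReLU approximant $g$ are all illustrative assumptions.

    # Sketch: estimate the L^2(P) distance by sampling from P.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.abs(x)                                    # target
    g = lambda x: np.maximum(x, 0) + 0.9 * np.maximum(-x, 0)   # crude ReLU approx

    X = rng.normal(0.0, 1.0, 1_000_000)            # samples from P = N(0, 1)
    l2_P_error = np.sqrt(np.mean((f(X) - g(X)) ** 2))
    print(f"estimated L^2(P) error: {l2_P_error:.4f}")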

Equivalence of approximation by convolutional neural networks and fully-connected networks

no code implementations • 4 Sep 2018 • Philipp Petersen, Felix Voigtlaender

Convolutional neural networks are the most widely used type of neural networks in applications.


Topological properties of the set of functions generated by neural networks of fixed size

no code implementations • 22 Jun 2018 • Philipp Petersen, Mones Raslan, Felix Voigtlaender

We analyze the topological properties of the set of functions that can be implemented by neural networks of a fixed size.

General Topology • Functional Analysis • MSC: 54H99, 68T05, 52A30
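
One concrete instance of such a topological property (the non-convexity established in the paper), verified numerically under illustrative choices: the midpoint of two width-1 ReLU networks generically has two kinks, which no width-1 network can produce, so the convex combination leaves the class.

    # Sketch: count kinks via discrete second differences; grid is illustrative.
    import numpy as np

    xs = np.linspace(-2, 2, 4001)
    f1 = np.maximum(xs - 0.5, 0)                  # width-1 net, kink at  0.5
    f2 = np.maximum(-xs - 0.5, 0)                 # width-1 net, kink at -0.5
    mid = 0.5 * (f1 + f2)                         # convex combination

    def num_kinks(y, tol=1e-8):
        curv = np.abs(np.diff(y, 2))              # nonzero only near kinks
        return int((curv > tol).sum())

    print("kinks in f1, f2, midpoint:",
          num_kinks(f1), num_kinks(f2), num_kinks(mid))   # 1 1 2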

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

no code implementations • 15 Sep 2017 • Philipp Petersen, Felix Voigtlaender

We study the necessary and sufficient complexity of ReLU neural networks (in terms of depth and number of weights) required for approximating classifier functions in $L^2$.
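
A worked micro-example of the basic approximation step (illustrative, not the paper's construction): a ReLU ramp of slope $n$ approximates the step function $1_{x>0}$ in $L^2([-1,1])$ with error of order $n^{-1/2}$, since the squared error is $\int_0^{1/n} (1 - nx)^2 \, dx = 1/(3n)$.

    # Sketch: L^2 error of the ramp relu(n x) - relu(n x - 1) vs. the step.
    import numpy as np

    xs = np.linspace(-1, 1, 2_000_001)
    step = (xs > 0).astype(float)

    for n in [10, 100, 1000]:
        ramp = np.clip(n * xs, 0, 1)                    # = relu(nx) - relu(nx - 1)
        l2 = np.sqrt(np.mean((step - ramp) ** 2) * 2)   # interval length is 2
        print(f"n = {n:5d}   L2 error = {l2:.4f}   n^(-1/2) = {n ** -0.5:.4f}")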
