no code implementations • 2 Nov 2023 • Paul Geuchen, Thomas Heindl, Dominik Stöger, Felix Voigtlaender
Empirical studies have widely demonstrated that neural networks are highly sensitive to small, adversarial perturbations of the input.
no code implementations • 16 Aug 2022 • Felix Voigtlaender
In this paper, we consider Barron functions $f : [0, 1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as \[ f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty. \]
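As a concrete illustration (not taken from the paper), the Gaussian $f(x) = e^{-\pi x^2}$ in one dimension has Fourier transform $F(\xi) = e^{-\pi \xi^2}$, so its Barron-type weight integral is finite for every $\sigma > 0$. A minimal numerical sketch of this check, with the function name `barron_integral` chosen here for illustration:

```python
import numpy as np

def barron_integral(sigma, lim=10.0, n=200_001):
    """Riemann-sum approximation of the 1-D Barron weight integral
    for F(xi) = exp(-pi xi^2), i.e. the Gaussian's Fourier transform."""
    xi = np.linspace(-lim, lim, n)
    dx = xi[1] - xi[0]
    F = np.exp(-np.pi * xi**2)
    weight = (1.0 + np.abs(xi)) ** sigma
    return float(np.sum(F * weight) * dx)

# For sigma = 1 the integral evaluates analytically to 1 + 1/pi ~ 1.3183,
# so the Gaussian satisfies the Barron condition for this smoothness.
print(barron_integral(1.0))
```

The same check works for any $\sigma > 0$, since the Gaussian tail dominates every polynomial weight.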
1 code implementation • 26 May 2022 • Julius Berner, Philipp Grohs, Felix Voigtlaender
Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class.
no code implementations • 23 Dec 2021 • Philipp Petersen, Felix Voigtlaender
We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity.
no code implementations • 28 Oct 2021 • Philipp Grohs, Felix Voigtlaender
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers.
no code implementations • 6 Apr 2021 • Philipp Grohs, Felix Voigtlaender
Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning.
no code implementations • 6 Dec 2020 • Felix Voigtlaender
We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks.
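To make the object of study concrete, the following is a minimal sketch (not from the paper) of a shallow complex-valued network: complex weights and biases, with the non-holomorphic "CReLU" activation that applies ReLU separately to the real and imaginary parts. The architecture and activation choice here are illustrative assumptions, not the specific class analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def crelu(z):
    # "CReLU": ReLU applied separately to real and imaginary parts,
    # a common non-holomorphic activation for complex-valued networks.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

# One hidden layer mapping C^d -> C, with complex weights and biases.
d, width = 3, 16
W1 = rng.standard_normal((width, d)) + 1j * rng.standard_normal((width, d))
b1 = rng.standard_normal(width) + 1j * rng.standard_normal(width)
w2 = rng.standard_normal(width) + 1j * rng.standard_normal(width)

def net(x):
    return w2 @ crelu(W1 @ x + b1)

x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
out = net(x)  # a single complex number
```

Universal approximation questions for such networks ask which activations allow arbitrarily good approximation of continuous functions on compact sets; for real networks this is classical, and the paper extends the characterization to the complex-valued case.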
no code implementations • 18 Nov 2020 • Andrei Caragea, Philipp Petersen, Felix Voigtlaender
We prove bounds for the approximation and estimation of certain binary classification functions using ReLU neural networks.
no code implementations • 3 Aug 2020 • Philipp Grohs, Andreas Klotz, Felix Voigtlaender
We also provide quantitative and non-asymptotic bounds on the probability that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits.
no code implementations • 3 May 2019 • Rémi Gribonval, Gitta Kutyniok, Morten Nielsen, Felix Voigtlaender
We study the expressivity of deep neural networks.
no code implementations • 9 Apr 2019 • Felix Voigtlaender, Philipp Petersen
In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
no code implementations • 4 Sep 2018 • Philipp Petersen, Felix Voigtlaender
Convolutional neural networks are the most widely used type of neural networks in applications.
no code implementations • 22 Jun 2018 • Philipp Petersen, Mones Raslan, Felix Voigtlaender
We analyze the topological properties of the set of functions that can be implemented by neural networks of a fixed size.
MSC classes: 54H99, 68T05, 52A30 (General Topology, Functional Analysis)
no code implementations • 15 Sep 2017 • Philipp Petersen, Felix Voigtlaender
We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in $L^2$.