no code implementations • 7 Dec 2023 • Simon Frieder, Julius Berner, Philipp Petersen, Thomas Lukasiewicz
Large language models (LLMs) such as ChatGPT have received immense interest for their general-purpose language understanding and, in particular, their ability to generate high-quality text or computer code.
no code implementations • 23 Dec 2021 • Philipp Petersen, Felix Voigtlaender
We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity.
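A minimal sketch of this setting, assuming PyTorch (not the paper's construction): noiseless labels are generated from a smooth decision boundary, and a small ReLU network is fit to the samples. The boundary curve and network sizes are illustrative choices.

```python
# Toy version of the setting: noiseless labels from a smooth decision
# boundary, fit by a small ReLU network. Assumes PyTorch is installed.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noiseless training samples: label is 1 iff x2 lies above a smooth curve.
X = torch.rand(2048, 2)
y = (X[:, 1] > 0.5 + 0.2 * torch.sin(4 * torch.pi * X[:, 0])).float()

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X).squeeze(1) > 0) == y.bool()).float().mean()
print(f"training accuracy: {acc.item():.3f}")
```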
no code implementations • 12 Aug 2021 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen
We present a deep learning-based algorithm to jointly solve a reconstruction problem and a wavefront set extraction problem in tomographic imaging.
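Schematically, jointly solving both tasks amounts to optimizing a shared network under a combined objective. The sketch below only conveys that shape; the module names, architecture, and loss weighting are hypothetical, not the authors' design. Assumes PyTorch.

```python
# Schematic joint objective: a shared backbone feeds a reconstruction head
# and a wavefront-set (per-pixel orientation) classification head.
import torch
import torch.nn as nn

class JointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(1, 16, 3, padding=1)    # shared features
        self.recon_head = nn.Conv2d(16, 1, 3, padding=1)  # image estimate
        self.wf_head = nn.Conv2d(16, 8, 3, padding=1)     # orientation logits

    def forward(self, backprojection):
        h = torch.relu(self.backbone(backprojection))
        return self.recon_head(h), self.wf_head(h)

def joint_loss(recon, wf_logits, image, wf_labels, alpha=0.5):
    # Weighted sum of reconstruction error and wavefront-set classification.
    return (nn.functional.mse_loss(recon, image)
            + alpha * nn.functional.cross_entropy(wf_logits, wf_labels))
```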
no code implementations • 9 May 2021 • Julius Berner, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
We describe the new field of mathematical analysis of deep learning.
no code implementations • 18 Nov 2020 • Andrei Caragea, Philipp Petersen, Felix Voigtlaender
We prove bounds for the approximation and estimation of certain binary classification functions using ReLU neural networks.
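Results of this type typically rest on the standard decomposition of the excess risk into an approximation term and an estimation term; a schematic version (not the paper's exact statement):

```latex
% For the empirical risk minimizer \hat{f}_n over a network class \mathcal{F},
\[
  \mathcal{E}(\hat{f}_n)
  \;\le\;
  \underbrace{\inf_{f \in \mathcal{F}} \mathcal{E}(f)}_{\text{approximation error}}
  \;+\;
  \underbrace{2 \sup_{f \in \mathcal{F}} \bigl| R(f) - R_n(f) \bigr|}_{\text{estimation error}},
\]
% where R is the risk, R_n the empirical risk, and
% \mathcal{E}(f) = R(f) - \inf_g R(g) the excess risk.
```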
1 code implementation • 25 Apr 2020 • Moritz Geist, Philipp Petersen, Mones Raslan, Reinhold Schneider, Gitta Kutyniok
Here, approximation theory predicts that the performance of the model depends only very mildly on the dimension of the parameter space and is instead determined by the intrinsic dimension of the solution manifold of the parametric partial differential equation.
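A toy sketch of the learning problem (not the paper's experiments), assuming PyTorch: a network maps PDE parameters to the solution sampled on a grid. The synthetic parametric family below stands in for a real solver.

```python
# Surrogate learning for a parametric problem: network maps parameters mu
# to the solution on a grid. solve() is a placeholder for an actual solver.
import torch
import torch.nn as nn

torch.manual_seed(0)
grid = torch.linspace(0, 1, 64)

def solve(mu):  # synthetic parametric family standing in for a PDE solver
    return torch.sin(mu[:, :1] * torch.pi * grid) * mu[:, 1:2]

mu_train = torch.rand(512, 2) * torch.tensor([3.0, 1.0])
u_train = solve(mu_train)

surrogate = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(mu_train), u_train)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.2e}")
```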
no code implementations • 9 Apr 2019 • Felix Voigtlaender, Philipp Petersen
In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
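Spelled out, the norm in question is the standard one:

```latex
% Approximation quality measured against the data distribution \mathbb{P}:
\[
  \| f - g \|_{L^2(\mathbb{P})}
  = \Bigl( \int | f(x) - g(x) |^2 \, d\mathbb{P}(x) \Bigr)^{1/2},
\]
% so the approximant only needs to be accurate where the data actually
% lives, rather than uniformly over the whole domain.
```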
no code implementations • 31 Mar 2019 • Gitta Kutyniok, Philipp Petersen, Mones Raslan, Reinhold Schneider
We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations.
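The shape such a bound takes, with placeholder exponents $c_1, c_2$ standing in for the paper's precise constants and conditions:

```latex
% Schematic complexity bound (placeholders, not the paper's constants):
% to achieve accuracy \varepsilon uniformly over the parameter set, a ReLU
% network realizing the solution map suffices with
\[
  \mathrm{size}(\Phi_\varepsilon)
  \;\lesssim\;
  d(\varepsilon)^{c_1} \cdot \log^{c_2}(1/\varepsilon),
\]
% where d(\varepsilon) is the dimension of a reduced basis achieving
% accuracy \varepsilon on the solution manifold, so the parameter-space
% dimension enters only through d(\varepsilon).
```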
no code implementations • 21 Feb 2019 • Ingo Gühring, Gitta Kutyniok, Philipp Petersen
We analyze approximation rates of deep ReLU neural networks for Sobolev-regular functions with respect to weaker Sobolev norms.
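Up to logarithmic factors and the precise hypotheses in the paper, rates of this kind take the following schematic form:

```latex
% Schematic rate: for f \in W^{n,p}((0,1)^d) and a weaker target norm
% W^{s,p} with 0 \le s \le 1 \le n, there exist ReLU networks \Phi with
\[
  \| f - \Phi \|_{W^{s,p}} \le \varepsilon
  \quad\text{and}\quad
  \mathrm{size}(\Phi) = \mathcal{O}\bigl( \varepsilon^{-d/(n-s)} \bigr),
\]
% so measuring the error in a weaker (smaller s) norm buys a better rate.
```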
1 code implementation • 5 Jan 2019 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen
Microlocal analysis provides deep insight into singularity structures and is often crucial for solving inverse problems, predominantly in the imaging sciences.
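A sketch of a wavefront-set classifier in the spirit of this line of work: a small CNN predicts, per patch, whether a singularity is present and its orientation bin from multiscale (e.g. shearlet-like) coefficients. The layer sizes and the 180-bin orientation grid are illustrative choices, not the authors' configuration. Assumes PyTorch.

```python
# Patch classifier over multiscale coefficients: outputs orientation-bin
# logits plus one extra class for "no singularity present".
import torch.nn as nn

n_scales, n_orient_bins = 4, 180

classifier = nn.Sequential(
    nn.Conv2d(n_scales, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_orient_bins + 1),  # orientation bins + "no singularity"
)
```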
no code implementations • 4 Sep 2018 • Philipp Petersen, Felix Voigtlaender
Convolutional neural networks are the most widely used type of neural networks in applications.
no code implementations • 22 Jun 2018 • Philipp Petersen, Mones Raslan, Felix Voigtlaender
We analyze the topological properties of the set of functions that can be implemented by neural networks of a fixed size.
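Written out, the object under study is the realization set of a fixed architecture:

```latex
% Fix an architecture with parameter space \mathbb{R}^P and realization map
% \theta \mapsto R_\theta; the paper analyzes the set
\[
  \mathcal{R} \;=\; \bigl\{ R_\theta : \theta \in \mathbb{R}^P \bigr\}
  \;\subseteq\; C([0,1]^d)
\]
% as a subset of a function space, asking, e.g., whether it is closed or convex.
```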
MSC classes: 54H99, 68T05, 52A30 (General Topology; Functional Analysis)
no code implementations • 15 Sep 2017 • Philipp Petersen, Felix Voigtlaender
We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in $L^2$.
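For orientation, the rates in this line of work have the following flavor; this is schematic, and the exponent and hypotheses should be checked against the paper:

```latex
% Schematic matching bound: for classifier functions whose decision boundary
% is piecewise C^\beta in [0,1]^d, achieving L^2-error \varepsilon with
% ReLU networks requires, and is possible with, on the order of
\[
  W(\varepsilon) \;\asymp\; \varepsilon^{-2(d-1)/\beta}
\]
% nonzero weights (up to logarithmic factors), at bounded depth.
```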
no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements.
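The benchmark quantity behind "optimally approximated" is the best $N$-term approximation error, written out below; the paper's thesis is that its decay rate over a function class transfers to networks whose connectivity scales comparably with $N$.

```latex
% Best N-term approximation error of f with respect to a system
% (\varphi_i)_{i \in I}:
\[
  \sigma_N(f) \;=\; \inf_{\substack{I_N \subset I,\; |I_N| = N \\ (c_i)_{i \in I_N}}}
  \Bigl\| f - \sum_{i \in I_N} c_i \varphi_i \Bigr\|_{L^2}.
\]
```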