Search Results for author: Philipp Petersen

Found 14 papers, 2 papers with code

Large Language Models for Mathematicians

no code implementations • 7 Dec 2023 • Simon Frieder, Julius Berner, Philipp Petersen, Thomas Lukasiewicz

Large language models (LLMs) such as ChatGPT have received immense interest for their general-purpose language understanding and, in particular, their ability to generate high-quality text or computer code.

Optimal learning of high-dimensional classification problems using deep neural networks

no code implementations • 23 Dec 2021 • Philipp Petersen, Felix Voigtlaender

We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity.
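
As a schematic formalization of this setting (illustrative notation, not necessarily the paper's): the target is an indicator-type classifier $f = \chi_\Omega$ with $\Omega \subset [0,1]^d$ and decision boundary $\partial\Omega$ of regularity $C^\beta$, and the question is how fast neural-network-based learners can recover $f$ from $n$ noiseless samples $(x_i, f(x_i))_{i=1}^n$, and whether that rate is optimal.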


Deep Microlocal Reconstruction for Limited-Angle Tomography

no code implementations • 12 Aug 2021 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen

We present a deep learning-based algorithm to jointly solve a reconstruction problem and a wavefront set extraction problem in tomographic imaging.
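
The snippet leaves the joint formulation implicit. Purely as a hypothetical sketch (the forward operator $A$, data $g$, and coupling penalty $d$ are illustrative labels, not taken from the paper), a joint approach of this flavor can be pictured as minimizing $\|A u - g\|_2^2 + \lambda \, d\big(\mathrm{WF}(u), \widehat{W}\big)$ over reconstructions $u$, where $\widehat{W}$ is a learned estimate of the wavefront set and the second term couples the reconstruction's singularities to that estimate.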

Numerical Solution of the Parametric Diffusion Equation by Deep Neural Networks

1 code implementation • 25 Apr 2020 • Moritz Geist, Philipp Petersen, Mones Raslan, Reinhold Schneider, Gitta Kutyniok

Here, approximation theory predicts that the performance of the model should depend only very mildly on the dimension of the parameter space, being determined instead by the intrinsic dimension of the solution manifold of the parametric partial differential equation.
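
For concreteness, the parametric diffusion equation has the standard form $-\nabla \cdot ( a_y \nabla u_y ) = f$ on a domain $D$ (boundary conditions and the exact coefficient parametrization are specified in the paper), with the coefficient $a_y$ depending on a parameter $y$; the solution manifold is the set $\{ u_y : y \}$, whose intrinsic dimension governs the predicted approximation rates.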

Approximation in $L^p(\mu)$ with deep ReLU neural networks

no code implementations • 9 Apr 2019 • Felix Voigtlaender, Philipp Petersen

In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
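
For context, $L^2(\mathbb{P})$ is exactly the norm that controls excess risk under the squared loss: writing $f^*(x) = \mathbb{E}[Y \mid X = x]$ for the regression function, one has $\mathbb{E}[(f(X) - Y)^2] - \mathbb{E}[(f^*(X) - Y)^2] = \|f - f^*\|_{L^2(\mathbb{P}_X)}^2$, so approximation bounds in this norm translate directly into statements about achievable risk.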

Learning Theory

A Theoretical Analysis of Deep Neural Networks and Parametric PDEs

no code implementations • 31 Mar 2019 • Gitta Kutyniok, Philipp Petersen, Mones Raslan, Reinhold Schneider

We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations.
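
Phrased schematically (the precise norms and discretization are in the paper), the object approximated is the parameter-to-solution map $y \mapsto u_y$: one seeks a ReLU network $\Phi$ with $\sup_{y} \| \Phi(y) - u_y \| \le \varepsilon$ whose size is bounded in terms of $\varepsilon$ and properties of the solution manifold rather than the raw parameter dimension.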

Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms

no code implementations • 21 Feb 2019 • Ingo Gühring, Gitta Kutyniok, Philipp Petersen

We analyze approximation rates of deep ReLU neural networks for Sobolev-regular functions with respect to weaker Sobolev norms.
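
Bounds of this type typically take the following schematic form (constants, logarithmic factors, and exact parameter ranges are in the paper): to approximate $f \in W^{n,p}((0,1)^d)$ to accuracy $\varepsilon$ in the weaker norm $W^{s,p}$ with $0 \le s \le 1$, ReLU networks with on the order of $\varepsilon^{-d/(n-s)}$ nonzero weights suffice; the weaker the target norm, the cheaper the approximation.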

Extraction of digital wavefront sets using applied harmonic analysis and deep neural networks

1 code implementation • 5 Jan 2019 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen

Microlocal analysis provides deep insight into singularity structures and is often crucial for solving inverse problems, predominantly in imaging sciences.
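
For orientation, the wavefront set is the standard microlocal notion refining the singular support by directions: $(x_0, \xi_0) \notin \mathrm{WF}(f)$ if there exist a cutoff $\varphi \in C_c^\infty$ with $\varphi(x_0) \neq 0$ and a conic neighborhood $\Gamma$ of $\xi_0$ such that $\widehat{\varphi f}$ decays faster than any polynomial on $\Gamma$. The pair $(x_0, \xi_0)$ thus records not just where $f$ is singular but along which directions.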

Equivalence of approximation by convolutional neural networks and fully-connected networks

no code implementations • 4 Sep 2018 • Philipp Petersen, Felix Voigtlaender

Convolutional neural networks are the most widely used type of neural networks in applications.


Topological properties of the set of functions generated by neural networks of fixed size

no code implementations • 22 Jun 2018 • Philipp Petersen, Mones Raslan, Felix Voigtlaender

We analyze the topological properties of the set of functions that can be implemented by neural networks of a fixed size.
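
To fix notation (chosen here for illustration): for a fixed architecture with parameters $\theta \in \Theta \subset \mathbb{R}^N$ and realization map $R$, the object of study is the set $\mathcal{R} = \{ R(\theta) : \theta \in \Theta \} \subset L^p$, and the topological questions concern, for example, whether $\mathcal{R}$ is closed or convex; non-closedness matters in practice because it can mean that a best approximation from $\mathcal{R}$ does not exist.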

General Topology, Functional Analysis (MSC: 54H99, 68T05, 52A30)

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

no code implementations • 15 Sep 2017 • Philipp Petersen, Felix Voigtlaender

We study the complexity of ReLU neural networks, in terms of depth and number of weights, that is necessary and sufficient for approximating classifier functions in $L^2$.
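
Results in this line take the schematic form (up to constants and logarithmic factors; precise hypotheses in the paper): approximating piecewise $C^\beta$ functions on a $d$-dimensional cube in $L^2$ to accuracy $\varepsilon$ is possible with, and in general requires, on the order of $\varepsilon^{-2(d-1)/\beta}$ nonzero weights, reflecting that the difficulty is governed by the $(d-1)$-dimensional decision boundary rather than the ambient dimension.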

Optimal Approximation with Sparsely Connected Deep Neural Networks

no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen

Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements.
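
The transfer principle can be stated schematically (exact conditions are in the paper): if a function class admits the best $N$-term approximation rate $\|f - f_N\| \lesssim N^{-\gamma}$ in some affine system (for example, wavelets or shearlets), then the same rate is achievable by neural networks whose number of nonzero weights grows like $N$, so dictionary-optimal rates carry over to networks.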
