Search Results for author: Philipp Grohs

Found 32 papers, 8 papers with code

Sampling Complexity of Deep Approximation Spaces

no code implementations • 20 Dec 2023 • Ahmed Abdeljawad, Philipp Grohs

While it is well known that neural networks enjoy excellent approximation capabilities, it remains a major challenge to compute such approximations from point samples.

Variational Monte Carlo on a Budget -- Fine-tuning pre-trained Neural Wavefunctions

1 code implementation • 15 Jul 2023 • Michael Scherbela, Leon Gerard, Philipp Grohs

Obtaining accurate solutions to the Schrödinger equation is the key challenge in computational quantum chemistry.

Variational Monte Carlo
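
As background, the following toy sketch illustrates the variational Monte Carlo principle on the 1D harmonic oscillator with a Gaussian trial wavefunction. This is only a hypothetical illustration of the method's core loop (sample from $|\psi|^2$, average the local energy); the paper's actual ansatz is a pre-trained neural wavefunction.

```python
import numpy as np

# Toy variational Monte Carlo (VMC) for the 1D harmonic oscillator
# H = -1/2 d^2/dx^2 + 1/2 x^2 with trial wavefunction psi_a(x) = exp(-a x^2).
# Hypothetical sketch; the paper uses a neural network, not this closed form.

def local_energy(x, a):
    # E_L(x) = (H psi_a)(x) / psi_a(x) = a + x^2 * (1/2 - 2 a^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

def vmc_energy(a, n_steps=50_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        # Metropolis step targeting the density |psi_a|^2 ~ exp(-2 a x^2)
        x_new = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        samples.append(local_energy(x, a))
    return np.mean(samples)

print(vmc_energy(0.5), vmc_energy(0.3))
```

At the optimal parameter $a = 0.5$ the local energy is constant, so the estimator returns the exact ground-state energy $0.5$ with zero variance; minimizing the estimated energy over the parameters is the "variational" step.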

FakET: Simulating Cryo-Electron Tomograms with Neural Style Transfer

1 code implementation • 4 Apr 2023 • Pavol Harar, Lukas Herrmann, Philipp Grohs, David Haselbach

A key shortcoming of these supervised learning methods is their need for large training data sets, typically generated from particle models in conjunction with complex numerical forward models simulating the physics of transmission electron microscopes.

Data Augmentation, Style Transfer

Towards a Foundation Model for Neural Network Wavefunctions

4 code implementations • 17 Mar 2023 • Michael Scherbela, Leon Gerard, Philipp Grohs

Furthermore, we provide ample experimental evidence to support the idea that extensive pre-training of such a generalized wavefunction model across different compounds and geometries could lead to a foundation wavefunction model.

Variational Monte Carlo

Learning ReLU networks to high uniform accuracy is intractable

1 code implementation • 26 May 2022 • Julius Berner, Philipp Grohs, Felix Voigtlaender

Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class.

Learning Theory

Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need?

2 code implementations • 19 May 2022 • Leon Gerard, Michael Scherbela, Philipp Marquetand, Philipp Grohs

Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry.

Integral representations of shallow neural network with Rectified Power Unit activation function

no code implementations • 20 Dec 2021 • Ahmed Abdeljawad, Philipp Grohs

In this effort, we derive a formula for the integral representation of a shallow neural network with the Rectified Power Unit activation function.
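
For orientation, the Rectified Power Unit of degree $k$ is $\sigma_k(t) = \max(0, t)^k$, and an integral representation superposes a continuum of such ridge functions. Schematically (the paper's exact measure, domain, and correction terms differ, so this is only the general shape):

$$f(x) = \int_{\mathbb{R}^d \times \mathbb{R}} \sigma_k\big(\langle w, x \rangle - b\big) \, d\mu(w, b),$$

where $\mu$ is a signed measure over weights $w$ and biases $b$.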

Sobolev-type embeddings for neural network approximation spaces

no code implementations • 28 Oct 2021 • Philipp Grohs, Felix Voigtlaender

We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers.
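
Schematically, writing $\Gamma_n(f)$ for the best $L^p$-error achievable by such a ReLU network with at most $n$ coefficients (subject to the stated depth and magnitude bounds), the approximation space with rate $\alpha > 0$ is

$$A^{\alpha} := \Big\{ f : \sup_{n \in \mathbb{N}} \, n^{\alpha} \cdot \Gamma_n(f) < \infty \Big\};$$

the paper's precise definition and normalization may differ, but this captures the classification by approximation rate.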


Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity bounds for Neural Network Approximation Spaces

no code implementations • 6 Apr 2021 • Philipp Grohs, Felix Voigtlaender

Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning.

Deep neural network approximation for high-dimensional parabolic Hamilton-Jacobi-Bellman equations

no code implementations • 9 Mar 2021 • Philipp Grohs, Lukas Herrmann

The approximation of solutions to second order Hamilton-Jacobi-Bellman (HJB) equations by deep neural networks is investigated.

Approximations with deep neural networks in Sobolev time-space

no code implementations • 23 Dec 2020 • Ahmed Abdeljawad, Philipp Grohs

Solutions of evolution equations generally lie in certain Bochner-Sobolev spaces, in which the solution may have regularity and integrability properties in the time variable that differ from those in the space variables.

Numerically Solving Parametric Families of High-Dimensional Kolmogorov Partial Differential Equations via Deep Learning

1 code implementation • NeurIPS 2020 • Julius Berner, Markus Dablander, Philipp Grohs

We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region.
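
To make the idea concrete, here is a hypothetical minimal sketch (assuming PyTorch) for the $d$-dimensional heat equation, where by Feynman-Kac the $L^2$-regression of simulated endpoints recovers the solution on a full space-time region. The paper's actual setup additionally feeds the PDE's parameters to the network, so that one model covers an entire family of Kolmogorov equations.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: learn u(t, x) for the heat equation u_t = 1/2 Δu,
# u(0, ·) = φ. By Feynman-Kac, u(t, x) = E[φ(x + W_t)], so L2-regression of
# single Brownian endpoint samples on (x, t) converges to the solution.

d = 10
phi = lambda x: (x ** 2).sum(-1, keepdim=True)   # example initial condition

net = nn.Sequential(nn.Linear(d + 1, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = 4 * torch.rand(1024, d) - 2                      # space points in [-2, 2]^d
    t = torch.rand(1024, 1)                              # times in [0, 1]
    y = phi(x + torch.sqrt(t) * torch.randn(1024, d))    # one Brownian sample each
    loss = ((net(torch.cat([x, t], -1)) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# For φ(x) = |x|^2 the exact solution is u(t, x) = |x|^2 + d*t, which can be
# used to check the trained network.
```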

Phase Transitions in Rate Distortion Theory and Deep Learning

no code implementations • 3 Aug 2020 • Philipp Grohs, Andreas Klotz, Felix Voigtlaender

We also provide quantitative and non-asymptotic bounds on the probability that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits.

Deep neural network approximation for high-dimensional elliptic PDEs with boundary conditions

no code implementations • 10 Jul 2020 • Philipp Grohs, Lukas Herrmann

In recent work it has been established that deep neural networks are capable of approximating solutions to a large class of parabolic partial differential equations without incurring the curse of dimensionality.

Uniform error estimates for artificial neural network approximations for heat equations

no code implementations • 20 Nov 2019 • Lukas Gonon, Philipp Grohs, Arnulf Jentzen, David Kofler, David Šiška

These mathematical results from the scientific literature prove in part that algorithms based on ANNs are capable of overcoming the curse of dimensionality in the numerical approximation of high-dimensional PDEs.

Deep neural network approximations for Monte Carlo algorithms

1 code implementation • 28 Aug 2019 • Philipp Grohs, Arnulf Jentzen, Diyora Salimova

One key argument in most of these results is, first, to use a Monte Carlo approximation scheme which can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of the used approximation scheme.
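
A hypothetical sketch of that first ingredient, a Monte Carlo estimator for $u(T, x) = \mathbb{E}[\varphi(X_T^x)]$ whose cost scales only polynomially in the dimension, might look as follows (Euler-Maruyama discretization; all function names are illustrative, not the paper's code):

```python
import numpy as np

# Hypothetical Monte Carlo estimator of u(T, x) = E[φ(X_T^x)], where X^x
# solves dX = μ(X)dt + σ(X)dW with X_0 = x. This is the scheme whose
# behaviour the DNN constructions described above are shown to mimic.

def mc_solution_at_point(x, T, mu, sigma, phi,
                         n_paths=10_000, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    dt = T / n_steps
    X = np.tile(x, (n_paths, 1))                  # all paths start at x
    for _ in range(n_steps):                      # Euler-Maruyama steps
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
        X = X + mu(X) * dt + sigma(X) * dW        # diagonal noise for simplicity
    return phi(X).mean()

# Example: geometric Brownian motion (a Black-Scholes setting) in d = 50.
d = 50
est = mc_solution_at_point(np.ones(d), T=1.0,
                           mu=lambda X: 0.05 * X,
                           sigma=lambda X: 0.2 * X,
                           phi=lambda X: np.maximum(X.max(axis=1) - 1.0, 0.0))
print(est)
```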

Space-time error estimates for deep neural network approximations for differential equations

no code implementations • 11 Aug 2019 • Philipp Grohs, Fabian Hornung, Arnulf Jentzen, Philipp Zimmermann

It is the subject of the main result of this article to provide space-time error estimates for DNN approximations of Euler approximations of certain perturbed differential equations.

Image Classification, Speech Recognition +1

How degenerate is the parametrization of neural networks with the ReLU activation function?

no code implementations • NeurIPS 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs

Approximation capabilities of neural networks can be used to deal with the latter non-convexity, which allows us to establish that, for sufficiently large networks, local minima of a regularized optimization problem on the realization space are almost optimal.

Towards a regularity theory for ReLU networks -- chain rule and global error estimates

no code implementations • 13 May 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs, Arnulf Jentzen

Although for neural networks with locally Lipschitz continuous activation functions the classical derivative exists almost everywhere, the standard chain rule is in general not applicable.
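
A minimal numerical illustration of this failure: with the usual convention $\mathrm{relu}'(0) = 0$, the network $f(x) = \mathrm{relu}(x) - \mathrm{relu}(-x)$ realizes the identity, yet the formal chain rule at $x = 0$ returns $0$ instead of the true derivative $1$.

```python
import numpy as np

# f(x) = relu(x) - relu(-x) equals x everywhere, so f'(0) = 1. But applying
# the chain rule formally with the convention relu'(0) = 0 yields 0 at x = 0.

relu = lambda x: np.maximum(x, 0.0)
relu_prime = lambda x: np.where(x > 0, 1.0, 0.0)   # convention: relu'(0) = 0

f = lambda x: relu(x) - relu(-x)                   # realizes the identity

def chain_rule_derivative(x):
    # formal chain rule: relu'(x)*1 - relu'(-x)*(-1)
    return relu_prime(x) * 1.0 - relu_prime(-x) * (-1.0)

print(f(0.0), chain_rule_derivative(0.0))   # derivative should be 1, not 0
```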

Deep Neural Network Approximation Theory

no code implementations • 8 Jan 2019 • Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei

This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.

Handwritten Digit Recognition, Image Classification +1

Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations

no code implementations • 9 Sep 2018 • Julius Berner, Philipp Grohs, Arnulf Jentzen

It can be concluded that ERM over deep neural network hypothesis classes overcomes the curse of dimensionality for the numerical solution of linear Kolmogorov equations with affine coefficients.
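
For reference, empirical risk minimization (ERM) over a hypothesis class $\mathcal{H}$ of deep networks selects, from $m$ training samples $(x_i, y_i)$,

$$\widehat{f} \in \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \; \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2,$$

and the generalization error analyzed in the paper measures how far this $\widehat{f}$ is from the true solution of the Kolmogorov equation (the squared-loss form shown here is the standard one; the paper's exact setting may add constraints).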

A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations

no code implementations • 7 Sep 2018 • Philipp Grohs, Fabian Hornung, Arnulf Jentzen, Philippe von Wurstemberger

Such numerical simulations suggest that ANNs can very efficiently approximate high-dimensional functions and, in particular, indicate that ANNs seem to have the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions arising in the computational problems named above.

Image Classification, Speech Recognition +2

The universal approximation power of finite-width deep ReLU networks

no code implementations • ICLR 2019 • Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei

We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.
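
In a common form (the parameters here are illustrative), the Weierstrass function is

$$W(x) = \sum_{k=0}^{\infty} a^{k} \cos\!\big(b^{k} \pi x\big), \qquad 0 < a < 1, \; ab \geq 1,$$

which is continuous everywhere yet nowhere differentiable under Hardy's condition $ab \geq 1$.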

Solving the Kolmogorov PDE by means of deep learning

no code implementations • 1 Jun 2018 • Christian Beck, Sebastian Becker, Philipp Grohs, Nor Jaafari, Arnulf Jentzen

Stochastic differential equations (SDEs) and the Kolmogorov partial differential equations (PDEs) associated to them have been widely used in models from engineering, finance, and the natural sciences.
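
For reference, the association the abstract alludes to is the classical Feynman-Kac link: if $X^x$ solves the SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$ with $X_0 = x$, then $u(t, x) = \mathbb{E}\big[\varphi(X_t^x)\big]$ solves the Kolmogorov backward equation

$$\partial_t u(t,x) = \tfrac{1}{2} \operatorname{Trace}\!\big(\sigma(x)\sigma(x)^{\top} \operatorname{Hess}_x u(t,x)\big) + \big\langle \mu(x), \nabla_x u(t,x) \big\rangle, \qquad u(0, x) = \varphi(x).$$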

Topology Reduction in Deep Convolutional Feature Extraction Networks

no code implementations • 10 Jul 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei

Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth $N$, the average number of operationally significant nodes per layer.

Optimal Approximation with Sparsely Connected Deep Neural Networks

no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen

Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements.
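
Roughly, an affine system generated by functions $\psi_j$ consists of dilated and translated versions

$$\big\{ |\det A|^{1/2} \, \psi_j(Ax - b) \big\},$$

where $A$ ranges over a family of invertible matrices and $b$ over translations; wavelets, ridgelets, curvelets, and shearlets all arise this way. The paper's definition is more general, so this is only a schematic.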

Energy Propagation in Deep Convolutional Neural Networks

no code implementations • 12 Apr 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei

This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates.

Discrete Deep Feature Extraction: A Theory and New Architectures

no code implementations • 26 May 2016 • Thomas Wiatowski, Michael Tschannen, Aleksandar Stanić, Philipp Grohs, Helmut Bölcskei

First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made, for the continuous-time case, in Mallat, 2012, and Wiatowski and Bölcskei, 2015.

Facial Landmark Detection, Feature Importance +2

Deep Convolutional Neural Networks on Cartoon Functions

no code implementations • 29 Apr 2016 • Philipp Grohs, Thomas Wiatowski, Helmut Bölcskei

Wiatowski and Bölcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities.

Translation
