no code implementations • 26 Jun 2024 • Samuel Lanthaler

This work addresses the parametric complexity of neural operator approximations for the general class of Lipschitz continuous operators.

no code implementations • 25 May 2024 • Nikola B. Kovachki, Samuel Lanthaler, Hrushikesh Mhaskar

The second contribution of this work is to show that "parametric efficiency" implies "data efficiency": using the Fourier neural operator (FNO) as a case study, we rigorously show that for a narrower class of operators, which FNO approximates efficiently in terms of the number of tunable parameters, operator learning is efficient in terms of data complexity as well.

no code implementations • 3 May 2024 • Samuel Lanthaler, Andrew M. Stuart, Margaret Trautner

Operator learning is a variant of machine learning that is designed to approximate maps between function spaces from data.
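As an illustration (not taken from the paper), the basic setup can be sketched on a toy linear operator: discretize input and output functions on a grid, generate sample pairs, and fit a map between the discretizations from data. The target operator `G` (here, an antiderivative) and all names are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_train = 32, 500
x = np.linspace(0, 1, n_grid)

# Target operator G: f -> (approximate) antiderivative of f.
# G is linear, so a linear model can represent it exactly.
def G(f):
    return np.cumsum(f) / n_grid

# Training pairs of discretized functions
F = rng.normal(size=(n_train, n_grid))
U = np.stack([G(f) for f in F])

# Fit a linear map between the discretized function spaces (least squares)
A, *_ = np.linalg.lstsq(F, U, rcond=None)

# Evaluate on an unseen input function
f_test = np.sin(2 * np.pi * x)
u_pred = f_test @ A
print(np.max(np.abs(u_pred - G(f_test))))
```

Because the toy operator is linear, least squares recovers it to machine precision; the interest of operator learning lies in the nonlinear case, where architectures such as neural operators replace the linear map.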

no code implementations • 24 Feb 2024 • Nikola B. Kovachki, Samuel Lanthaler, Andrew M. Stuart

This review article summarizes recent progress and the current state of our theoretical understanding of neural operators, focusing on an approximation theoretic point of view.

no code implementations • 28 Jun 2023 • Samuel Lanthaler, Andrew M. Stuart

The first contribution of this paper is to prove that for general classes of operators characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity", an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems.

2 code implementations • NeurIPS 2023 • Samuel Lanthaler, Nicholas H. Nelsen

This paper provides a comprehensive error analysis of learning with vector-valued random features (RF).
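For orientation, random feature regression fixes a randomly drawn feature map and trains only the linear output weights. The paper treats vector-valued outputs; the sketch below is a minimal scalar-output version with hypothetical dimensions and a standard random Fourier feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_features(x, omega, b):
    """Random Fourier features: phi(x) = cos(x @ omega.T + b)."""
    return np.cos(x @ omega.T + b)

# Toy scalar-output problem (the paper's setting is vector-valued)
n, d, m = 200, 2, 100                  # samples, input dim, number of features
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

omega = rng.normal(size=(m, d))        # frequencies, drawn once and then fixed
b = rng.uniform(0, 2 * np.pi, size=m)
Phi = rf_features(X, omega, b)

# Ridge regression on the fixed random features: only linear weights are trained
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
pred = Phi @ w
print(np.mean((pred - y) ** 2))
```

The error analysis in the paper concerns how the choices of the number of features, the regularization, and the sample size interact; this sketch only fixes the objects those results refer to.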

no code implementations • 26 Apr 2023 • Samuel Lanthaler, Zongyi Li, Andrew M. Stuart

A popular variant of neural operators is the Fourier neural operator (FNO).
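The core building block of the FNO is a spectral convolution layer: transform the input function to Fourier space, act on a truncated set of low modes with learnable complex weights, and transform back. A minimal 1-D sketch (random untrained weights, hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(v, weights, k_max):
    """One FNO spectral convolution (1-D, real input):
    FFT -> keep the lowest k_max modes -> multiply by learnable
    complex weights -> inverse FFT back to physical space."""
    v_hat = np.fft.rfft(v)                      # Fourier coefficients
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = weights * v_hat[:k_max]   # act on low modes only
    return np.fft.irfft(out_hat, n=v.shape[0])

n, k_max = 64, 8
v = np.sin(2 * np.pi * np.arange(n) / n)        # input function on a grid
W = rng.normal(size=k_max) + 1j * rng.normal(size=k_max)
u = fourier_layer(v, W, k_max)
print(u.shape)  # (64,)
```

A full FNO stacks such layers with pointwise linear maps and nonlinearities; because the learnable weights live on a fixed number of Fourier modes, the layer is independent of the grid resolution.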

no code implementations • 28 Mar 2023 • Samuel Lanthaler

Two potential obstacles to efficient operator learning with PCA-Net are then identified and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, as measured by slow decay of the PCA eigenvalues.

no code implementations • 3 Oct 2022 • Samuel Lanthaler, Roberto Molinaro, Patrik Hadorn, Siddhartha Mishra

A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities.

no code implementations • 18 Apr 2021 • Tim De Ryck, Samuel Lanthaler, Siddhartha Mishra

We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function.
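As a numerical companion to such approximation results (this sketch is not the paper's constructive proof), one can fit the output layer of a one-hidden-layer tanh network to an analytic target by least squares and observe a small uniform error; the widths and weight scales below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer tanh network; hidden weights are random and fixed,
# and only the output weights are fitted (a least-squares sketch).
m = 50                                           # hidden width
x = np.linspace(-1, 1, 200)
target = np.sin(np.pi * x)                       # an analytic target function

w = rng.normal(size=m, scale=3.0)                # random hidden weights
b = rng.uniform(-3, 3, size=m)
H = np.tanh(np.outer(x, w) + b)                  # hidden-layer activations

a, *_ = np.linalg.lstsq(H, target, rcond=None)   # fit output layer
approx = H @ a
print(np.max(np.abs(approx - target)))           # sup-norm error on the grid
```

The theoretical bounds quantify how fast such errors can decay, in Sobolev norms rather than just the sup norm, as the width grows.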
