Search Results for author: Tim De Ryck

Found 11 papers, 2 papers with code

An operator preconditioning perspective on training in physics-informed machine learning

no code implementations 9 Oct 2023 Tim De Ryck, Florent Bonnet, Siddhartha Mishra, Emmanuel de Bézenac

In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs).

Physics-informed machine learning
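
As a purely illustrative aside on the training setup this paper analyzes, here is a minimal sketch of the PDE-residual loss that gradient descent minimizes in a PINN. The network, the example PDE (the 1D heat equation u_t = u_xx) and the random collocation points are assumptions made for the sketch, not details taken from the paper.

```python
# Hypothetical minimal PINN residual loss (PyTorch); the architecture and the
# example PDE (1D heat equation u_t - u_xx = 0) are illustrative assumptions.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt):
    """Residual of u_t - u_xx = 0 at collocation points xt = (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

xt = torch.rand(256, 2)                # random collocation points in [0, 1]^2
loss = pde_residual(xt).pow(2).mean()  # residual term of the PINN loss
loss.backward()                        # the loss that gradient descent acts on
```

A full PINN loss would add boundary- and initial-condition terms to this residual term.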

Convolutional Neural Operators for robust and accurate learning of PDEs

1 code implementation NeurIPS 2023 Bogdan Raonić, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, Emmanuel de Bézenac

Although very successfully used in conventional machine learning, convolution-based neural network architectures -- believed to be inconsistent in function space -- have been largely ignored in the context of learning solution operators of PDEs.

Operator learning • PDE Surrogate Modeling
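
For context only: a toy convolution-based surrogate that maps a discretized input function to a discretized output function on a fixed 1D grid. This is not the CNO architecture from the paper; the layer sizes and grid resolution are assumptions.

```python
# Hypothetical toy convolutional surrogate for operator learning (PyTorch);
# it maps input functions sampled on a 1D grid to output functions on the
# same grid. Layer sizes and resolution are illustrative assumptions.
import torch

model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=5, padding=2), torch.nn.GELU(),
    torch.nn.Conv1d(16, 16, kernel_size=5, padding=2), torch.nn.GELU(),
    torch.nn.Conv1d(16, 1, kernel_size=5, padding=2),
)

a = torch.randn(8, 1, 128)  # batch of 8 input functions on 128 grid points
u_pred = model(a)           # predicted output functions, shape (8, 1, 128)
```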

wPINNs: Weak Physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws

no code implementations 18 Jul 2022 Tim De Ryck, Siddhartha Mishra, Roberto Molinaro

Physics informed neural networks (PINNs) require regularity of solutions of the underlying PDE to guarantee accurate approximation.

Error analysis for deep neural network approximations of parametric hyperbolic conservation laws

no code implementations 15 Jul 2022 Tim De Ryck, Siddhartha Mishra

We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks.

Variable-Input Deep Operator Networks

no code implementations 23 May 2022 Michael Prasthofer, Tim De Ryck, Siddhartha Mishra

Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability.

Operator learning
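
To illustrate the fixed-sensor restriction mentioned in the abstract above, the sketch below is a minimal DeepONet-style model: the branch network takes the input function evaluated at exactly m fixed sensor locations, so any change in the number or placement of sensors requires a new model. All sizes are assumptions, and this is not the variable-input architecture proposed in the paper.

```python
# Hypothetical minimal DeepONet-style sketch (PyTorch). The branch net is tied
# to m fixed sensor locations, which is the restriction discussed above.
import torch

m = 100  # number of fixed sensor locations baked into the branch net
branch = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.Tanh(), torch.nn.Linear(64, 32))
trunk = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 32))

u_at_sensors = torch.randn(16, m)                      # input functions at the m sensors
y = torch.rand(16, 1)                                  # query points in the output domain
G_u_y = (branch(u_at_sensors) * trunk(y)).sum(dim=-1)  # operator output G(u)(y), shape (16,)
```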

Generic bounds on the approximation error for physics-informed (and) operator learning

no code implementations 23 May 2022 Tim De Ryck, Siddhartha Mishra

We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and for operator learning architectures such as DeepONets and FNOs, as well as for physics-informed operator learning.

Operator learning

Error estimates for physics informed neural networks approximating the Navier-Stokes equations

no code implementations 17 Mar 2022 Tim De Ryck, Ameya D. Jagtap, Siddhartha Mishra

We prove rigorous bounds on the errors resulting from the approximation of the incompressible Navier-Stokes equations with (extended) physics informed neural networks.

Error analysis for physics informed neural networks (PINNs) approximating Kolmogorov PDEs

no code implementations 28 Jun 2021 Tim De Ryck, Siddhartha Mishra

Moreover, we prove that the size of the PINNs and the number of training samples only grow polynomially with the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context.

On the approximation of functions by tanh neural networks

no code implementations 18 Apr 2021 Tim De Ryck, Samuel Lanthaler, Siddhartha Mishra

We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function.

Change Point Detection in Time Series Data using Autoencoders with a Time-Invariant Representation

2 code implementations 21 Aug 2020 Tim De Ryck, Maarten De Vos, Alexander Bertrand

Detectable change points include abrupt changes in the slope, mean, variance, autocorrelation function and frequency spectrum.

Change Point Detection • Time Series • +1
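
As a rough illustration of autoencoder-based change point detection, the sketch below encodes sliding windows of a univariate series and scores how strongly the latent representation changes between windows that lie one window-length apart. The window length, networks and dissimilarity measure are assumptions; this is not the time-invariant-representation model released with the paper.

```python
# Hypothetical autoencoder-based change point scoring (PyTorch); not the
# authors' released implementation. The training loop is omitted for brevity.
import torch

win = 32
encoder = torch.nn.Sequential(torch.nn.Linear(win, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))
decoder = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, win))

x = torch.randn(2000)          # univariate time series (placeholder data)
windows = x.unfold(0, win, 1)  # overlapping windows, shape (N, win)

# Training (omitted): minimize the reconstruction loss
# torch.nn.functional.mse_loss(decoder(encoder(windows)), windows)

with torch.no_grad():
    z = encoder(windows)                        # latent features per window
    dissim = (z[win:] - z[:-win]).norm(dim=1)   # distance between windows win steps apart
    scores = dissim / dissim.max()              # peaks suggest candidate change points
```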

On the approximation of rough functions with deep neural networks

no code implementations 13 Dec 2019 Tim De Ryck, Siddhartha Mishra, Deep Ray

Deep neural networks and the ENO procedure are both efficient frameworks for approximating rough functions.

Data Compression
