Search Results for author: Emmanuel de Bézenac

Found 14 papers, 6 papers with code

Convolutional Neural Operators for robust and accurate learning of PDEs

1 code implementation NeurIPS 2023 Bogdan Raonić, Roberto Molinaro, Tim De Ryck, Tobias Rohner, Francesca Bartolucci, Rima Alaifari, Siddhartha Mishra, Emmanuel de Bézenac

Although used with great success in conventional machine learning, convolution-based neural network architectures -- believed to be inconsistent in function space -- have been largely ignored in the context of learning solution operators of PDEs.

Operator Learning · PDE Surrogate Modeling

Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting

2 code implementations ICLR 2021 Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bézenac, Ibrahim Ayed, Nicolas Thome, Patrick Gallinari

In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models.
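The decomposition idea behind this kind of physics augmentation can be sketched numerically: the full dynamics are written as a known-but-incomplete physical term plus a data-driven residual, and the trajectory is integrated under their sum. A minimal sketch, assuming a pendulum where damping is the missing physics; the `f_aug` function stands in for a trained network and is an illustrative assumption, not the paper's actual model or training procedure.

```python
import numpy as np

def f_phys(state, omega=2.0):
    # Known but incomplete physics: a frictionless pendulum.
    theta, dtheta = state
    return np.array([dtheta, -omega**2 * np.sin(theta)])

def f_aug(state, k=0.3):
    # Data-driven augmentation; here a hand-written stand-in for a
    # network that has learned the missing damping term.
    _, dtheta = state
    return np.array([0.0, -k * dtheta])

def step(state, dt=0.01):
    # Euler step of the augmented dynamics dx/dt = f_phys(x) + f_aug(x).
    return state + dt * (f_phys(state) + f_aug(state))

state = np.array([1.0, 0.0])  # initial angle 1 rad, at rest
for _ in range(1000):         # integrate for 10 seconds
    state = step(state)
```

With the augmentation active, the oscillation decays as real (damped) measurements would, while the physical term alone would conserve amplitude.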

Unifying GANs and Score-Based Diffusion as Generative Particle Models

1 code implementation NeurIPS 2023 Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bézenac, Mickaël Chen, Alain Rakotomamonjy

Particle-based deep generative models, such as gradient flows and score-based diffusion models, have recently gained traction thanks to their striking performance.

LEADS: Learning Dynamical Systems that Generalize Across Environments

1 code implementation NeurIPS 2021 Yuan Yin, Ibrahim Ayed, Emmanuel de Bézenac, Nicolas Baskiotis, Patrick Gallinari

Both are sub-optimal: the former disregards the discrepancies between environments leading to biased solutions, while the latter does not exploit their potential commonalities and is prone to scarcity problems.

A Neural Tangent Kernel Perspective of GANs

1 code implementation 10 Jun 2021 Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari

We propose a novel theoretical framework of analysis for Generative Adversarial Networks (GANs).

Learning Dynamical Systems from Partial Observations

no code implementations 26 Feb 2019 Ibrahim Ayed, Emmanuel de Bézenac, Arthur Pajot, Julien Brajard, Patrick Gallinari

We consider the problem of forecasting complex, nonlinear space-time processes when observations provide only partial information on the system's state.

Optimal Unsupervised Domain Translation

no code implementations 4 Jun 2019 Emmanuel de Bézenac, Ibrahim Ayed, Patrick Gallinari

Domain Translation is the problem of finding a meaningful correspondence between two domains.

Translation

A Principle of Least Action for the Training of Neural Networks

1 code implementation 17 Sep 2020 Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari

From this observation, we reformulate the learning problem as follows: finding neural networks which solve the task while transporting the data as efficiently as possible.

Learning Theory
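The transport view can be made concrete by measuring how far the residual blocks move the data: a kinetic-energy-style cost summed over layers, which the least-action reformulation asks to keep small. A minimal sketch with small random blocks standing in for trained layers; the toy shapes and the `transport_cost` helper are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def transport_cost(activations):
    # Sum over layers of the squared displacement each residual block
    # applies to the batch: how "far" the network transports the data.
    return sum(np.sum((b - a) ** 2) for a, b in zip(activations, activations[1:]))

# Toy residual network: x_{l+1} = x_l + g_l(x_l), with small random
# blocks in place of trained ones.
x = rng.standard_normal((32, 8))   # batch of 32 points in 8 dimensions
activations = [x]
for _ in range(4):                  # 4 residual blocks
    w = 0.1 * rng.standard_normal((8, 8))
    x = x + np.tanh(x @ w)          # residual update
    activations.append(x)

cost = transport_cost(activations)
```

An identity network (all blocks zero) has zero transport cost; among networks that solve the task, the least-action principle prefers those closest to this extreme.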

Deep Rao-Blackwellised Particle Filters for Time Series Forecasting

no code implementations NeurIPS 2020 Richard Kurle, Syama Sundar Rangapuram, Emmanuel de Bézenac, Stephan Günnemann, Jan Gasthaus

We propose a Monte Carlo objective that leverages the conditional linearity by computing the corresponding conditional expectations in closed form, together with a suitable proposal distribution that is factorised similarly to the optimal proposal distribution.

Time Series · Time Series Forecasting
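The variance-reduction mechanism at the heart of Rao-Blackwellisation is that any quantity with a closed-form conditional expectation is integrated out analytically instead of sampled. A generic one-dimensional illustration of that principle, not the paper's state-space model: the variable names and the toy target `E[X^2 + Y]` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[X^2 + Y] where X ~ N(0, 1) and Y | X ~ N(X, 1).
# The true value is E[X^2] + E[Y] = 1 + 0 = 1.
n = 100_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)

# Naive Monte Carlo samples both variables.
naive = x**2 + y

# Rao-Blackwellised estimator: Y is integrated out in closed form
# using E[Y | X] = X, so only X is sampled.
rao_blackwell = x**2 + x
```

Both estimators are unbiased for the same expectation, but the Rao-Blackwellised one has strictly lower variance, since the noise of sampling Y is removed analytically.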

Unsupervised Spatiotemporal Data Inpainting

no code implementations 25 Sep 2019 Yuan Yin, Arthur Pajot, Emmanuel de Bézenac, Patrick Gallinari

We tackle the problem of inpainting occluded areas in spatiotemporal sequences, such as cloud-occluded satellite observations, in an unsupervised manner.

Generative Adversarial Network

Block-wise Training of Residual Networks via the Minimizing Movement Scheme

no code implementations 3 Oct 2022 Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari

End-to-end backpropagation has a few shortcomings: it requires loading the entire model during training, which can be impossible in constrained settings, and suffers from three locking problems (forward locking, update locking and backward locking), which prohibit training the layers in parallel.

An operator preconditioning perspective on training in physics-informed machine learning

no code implementations 9 Oct 2023 Tim De Ryck, Florent Bonnet, Siddhartha Mishra, Emmanuel de Bézenac

In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs).

Physics-informed machine learning
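The residual objective these methods minimize can be sketched on a model problem: collocation points on the domain, a PDE residual evaluated at each, and its mean square as the training loss. A minimal sketch for 1D Poisson, assuming finite differences as a stand-in for the automatic differentiation of a network; the grid, forcing term, and helper names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Model problem: -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# where f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)

def residual(u):
    # Interior PDE residual -u'' - f via central differences
    # (a stand-in for autodiff of a neural network ansatz).
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return -lap - f[1:-1]

def loss(u):
    # Mean-squared residual: the quantity PINN-style training minimizes.
    return np.mean(residual(u) ** 2)

u_exact = np.sin(np.pi * x)
u_wrong = np.zeros(n)
```

The conditioning of the operator behind this loss, rather than the loss value itself, is what governs how quickly gradient descent makes progress.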
