Search Results for author: Gérard Biau

Found 17 papers, 8 papers with code

Physics-informed machine learning as a kernel method

no code implementations • 12 Feb 2024 • Nathan Doumèche, Francis Bach, Claire Boyer, Gérard Biau

We consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency.

Physics-informed machine learning • regression
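
To make the setup concrete, here is a minimal sketch of such a PDE-regularized empirical risk, reduced to a hypothetical toy ODE law f'(x) + k*f(x) = 0 and a polynomial model so that the penalized problem stays a linear least squares. None of these modeling choices come from the paper; they only illustrate the shape of the objective.

```python
# Sketch: empirical risk + differential-equation penalty, solved jointly as
# one penalized least-squares problem (toy setting, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
k, lam, degree = 1.0, 10.0, 6  # arbitrary "physics" constant, penalty weight, model size

# Noisy observations of the true solution f(x) = exp(-k x).
x_data = rng.uniform(0.0, 2.0, size=20)
y_data = np.exp(-k * x_data) + 0.05 * rng.normal(size=20)

def features(x, d):      # phi_j(x) = x^j
    return np.vander(x, d + 1, increasing=True)

def d_features(x, d):    # phi_j'(x) = j * x^(j-1)
    V = np.vander(x, d + 1, increasing=True)
    return np.hstack([np.zeros((len(x), 1)), V[:, :-1] * np.arange(1, d + 1)])

x_col = np.linspace(0.0, 2.0, 50)   # collocation points where the ODE residual is penalized
A = np.vstack([features(x_data, degree),
               np.sqrt(lam) * (d_features(x_col, degree) + k * features(x_col, degree))])
b = np.concatenate([y_data, np.zeros(len(x_col))])
w, *_ = np.linalg.lstsq(A, b, rcond=None)   # data fit + physics penalty in one solve

print("fit at x=1:", features(np.array([1.0]), degree) @ w, "truth:", np.exp(-k))
```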

Implicit regularization of deep residual networks towards neural ODEs

1 code implementation • 3 Sep 2023 • Pierre Marion, Yu-Han Wu, Michael E. Sander, Gérard Biau

Our results are valid for a finite training time, and also as the training time tends to infinity provided that the network satisfies a Polyak-Lojasiewicz condition.

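For reference, the Polyak-Lojasiewicz condition mentioned in the abstract is the standard inequality below (generic notation: $\ell$ is the training loss and $\ell^*$ its infimum).

```latex
% Polyak-Lojasiewicz (PL) condition: there exists \mu > 0 such that
\[
  \tfrac{1}{2}\,\bigl\|\nabla \ell(\theta)\bigr\|^{2}
  \;\ge\; \mu\,\bigl(\ell(\theta) - \ell^{*}\bigr)
  \qquad \text{for all } \theta,
\]
% which yields linear convergence of gradient methods without convexity.
```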

Scaling ResNets in the Large-depth Regime

1 code implementation • 14 Jun 2022 • Pierre Marion, Adeline Fermanian, Gérard Biau, Jean-Philippe Vert

We show in a probabilistic setting that, with standard i.i.d. initializations, the only non-trivial dynamics is for $\alpha_L = 1/\sqrt{L}$ (other choices lead either to explosion or to identity mapping).
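
A rough numerical illustration of the scaling in question (a sketch with arbitrary width, depth, and layer map, not the paper's experiments):

```python
# Schematic residual recursion h_{k+1} = h_k + alpha_L * f_k(h_k),
# comparing alpha_L = 1/L, 1/sqrt(L), and 1 at large depth L.
import numpy as np

rng = np.random.default_rng(0)
L, d = 1000, 32                       # depth and width (arbitrary)

for alpha in (1.0 / L, 1.0 / np.sqrt(L), 1.0):
    h = np.ones(d) / np.sqrt(d)
    for _ in range(L):
        W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))  # i.i.d. layer weights
        h = h + alpha * np.tanh(W @ h)                       # scaled residual update
    print(f"alpha_L = {alpha:.4f}: ||h_L|| = {np.linalg.norm(h):.3f}")
```

With 1/L the output barely moves from its input (identity mapping), with 1 the norm blows up with depth, and 1/sqrt(L) sits in between, loosely matching the trichotomy in the abstract.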

Optimal 1-Wasserstein Distance for WGANs

no code implementations • 8 Jan 2022 • Arthur Stéphanovitch, Ugo Tanielian, Benoît Cadre, Nicolas Klutchnikoff, Gérard Biau

The mathematical forces at work behind Generative Adversarial Networks raise challenging theoretical issues.

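For reference, the distance in the title in its Kantorovich-Rubinstein dual form (standard notation: $\mu$ is the data distribution, $G_\theta$ the generator, $\nu$ the noise distribution):

```latex
% 1-Wasserstein distance between data and generator distributions:
\[
  W_1(\mu, \mu_\theta)
  \;=\; \sup_{D \,:\, \mathrm{Lip}(D) \le 1}
  \mathbb{E}_{X \sim \mu}\bigl[D(X)\bigr]
  - \mathbb{E}_{Z \sim \nu}\bigl[D\bigl(G_\theta(Z)\bigr)\bigr],
\]
% which WGANs approximate by restricting D to a parametric class of critics.
```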

Framing RNN as a kernel method: A neural ODE approach

1 code implementation • NeurIPS 2021 • Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau

Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of an RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature.
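
For context, the signature mentioned here is the standard collection of iterated integrals of the input path; the display below is a schematic rendering of the claim in generic notation ($x$ is the input path, $w$ a linear readout), not the paper's exact statement.

```latex
% Signature of a path x : [0,1] -> R^d, and the linear-readout view of the RNN:
\[
  S(x) \;=\; \Bigl( \idotsint_{0 \le t_1 \le \dots \le t_k \le 1}
  dx_{t_1}^{i_1} \cdots dx_{t_k}^{i_k}
  \Bigr)_{k \ge 0,\; 1 \le i_1, \dots, i_k \le d},
  \qquad
  \text{RNN output} \;\approx\; \langle w, S(x) \rangle .
\]
```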

SHAFF: Fast and consistent SHApley eFfect estimates via random Forests

1 code implementation • 25 May 2021 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet

Interpretability of learning algorithms is crucial for applications involving critical decisions, and variable importance is one of the main interpretation tools.
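
As background (this is the standard definition from global sensitivity analysis, not SHAFF's forest-based estimator), the Shapley effect of input $X_j$ among $p$ inputs, with value function $\mathrm{val}(S) = \mathrm{Var}(\mathbb{E}[Y \mid X_S])/\mathrm{Var}(Y)$, is:

```latex
% Shapley effect of input X_j:
\[
  \phi_j \;=\; \frac{1}{p}
  \sum_{S \subseteq \{1,\dots,p\} \setminus \{j\}}
  \binom{p-1}{|S|}^{-1}
  \bigl( \mathrm{val}(S \cup \{j\}) - \mathrm{val}(S) \bigr).
\]
% Exact computation is exponential in p, hence the need for fast estimators.
```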

Wasserstein Random Forests and Applications in Heterogeneous Treatment Effects

1 code implementation • 8 Jun 2020 • Qiming Du, Gérard Biau, François Petit, Raphaël Porcher

We present new insights into causal inference in the context of Heterogeneous Treatment Effects by proposing natural variants of Random Forests to estimate the key conditional distributions.

Causal Inference • Philosophy
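
In the standard potential-outcomes notation (background, not the paper's contribution), the objects of interest are the conditional distributions of the potential outcomes and the conditional average treatment effect:

```latex
% Conditional average treatment effect and key conditional distributions:
\[
  \tau(x) \;=\; \mathbb{E}\bigl[\,Y(1) - Y(0) \mid X = x\,\bigr],
  \qquad
  \mathcal{L}\bigl(Y(t) \mid X = x\bigr), \quad t \in \{0, 1\}.
\]
```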

Some Theoretical Insights into Wasserstein GANs

no code implementations • 4 Jun 2020 • Gérard Biau, Maxime Sangnier, Ugo Tanielian

Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation.

Text Generation

Interpretable Random Forests via Rule Extraction

no code implementations • 29 Apr 2020 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet

We introduce SIRUS (Stable and Interpretable RUle Set) for regression, a stable rule learning algorithm which takes the form of a short and simple list of rules.
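
To illustrate the form such a model takes (the rules and numbers below are entirely hypothetical, not from the paper):

```python
# Hypothetical SIRUS-style rule set for regression: each rule maps a simple
# condition to a pair of values, and the prediction averages the rule outputs.
def rule_set_predict(x, rules):
    # Each rule: (condition, value_if_true, value_if_false).
    outputs = [v_true if cond(x) else v_false for cond, v_true, v_false in rules]
    return sum(outputs) / len(outputs)

rules = [  # made-up rules on a dict of named features
    (lambda x: x["temperature"] < 20.0, 4.1, 7.9),
    (lambda x: x["pressure"] >= 1.2,    6.5, 3.8),
    (lambda x: x["flow"] < 0.5,         2.2, 8.0),
]
print(rule_set_predict({"temperature": 18.0, "pressure": 1.0, "flow": 0.7}, rules))
```

The point of the short list is that a human can read every rule; stability means the same rules are recovered across perturbations of the data.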

SIRUS: Stable and Interpretable RUle Set for Classification

no code implementations • 19 Aug 2019 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet

State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as "black-boxes" because of the high number and complexity of operations involved in their prediction mechanism.

Classification • General Classification

Accelerated Gradient Boosting

1 code implementation • 6 Mar 2018 • Gérard Biau, Benoît Cadre, Laurent Rouvière

Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees, by solving an infinite-dimensional optimization problem.
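
A minimal sketch of combining Nesterov's acceleration with gradient tree boosting, assuming squared loss, shallow scikit-learn trees as weak learners, and textbook momentum coefficients; this is a schematic, not the paper's exact algorithm.

```python
# Accelerated boosting sketch: fit each tree to the residuals of a momentum
# sequence g, then extrapolate with Nesterov coefficients.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

nu, T = 0.1, 50                    # shrinkage and number of rounds (arbitrary)
f = g = np.full(200, y.mean())     # model and momentum sequences (train values)
lam = 1.0
for _ in range(T):
    tree = DecisionTreeRegressor(max_depth=2).fit(X, y - g)   # fit residuals of g
    f_next = g + nu * tree.predict(X)                         # boosting step
    lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam**2)) / 2.0      # Nesterov coefficient
    g = f_next + ((lam - 1.0) / lam_next) * (f_next - f)      # momentum extrapolation
    f, lam = f_next, lam_next

print("train MSE:", np.mean((f - y) ** 2))
```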

Optimization by gradient boosting

no code implementations • 17 Jul 2017 • Gérard Biau, Benoît Cadre

Gradient boosting is a state-of-the-art prediction technique that sequentially produces a model in the form of linear combinations of simple predictors (typically decision trees) by solving an infinite-dimensional convex optimization problem.
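
In symbols, this is the standard functional-gradient view of boosting (generic notation, as background):

```latex
% Functional gradient descent on a convex risk C over a function space:
% each weak learner h_t approximates the negative functional gradient.
\[
  F_t \;=\; F_{t-1} + \nu\, h_t,
  \qquad
  h_t \;\approx\; -\nabla C(F_{t-1}),
\]
% so boosting is a descent method on an infinite-dimensional problem.
```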

Neural Random Forests

2 code implementations • 25 Apr 2016 • Gérard Biau, Erwan Scornet, Johannes Welbl

Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights.

regression
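
A toy rendering of the construction (hypothetical tree and weights): layer 1 evaluates the split indicators, layer 2 identifies the active leaf, and the output layer stores the leaf values. Hard thresholds are shown here; the paper's construction also relaxes them to smooth activations so the network can be retrained.

```python
# A two-split decision tree on x = (x1, x2) rewritten as a two-hidden-layer net.
import numpy as np

step = lambda z: (z > 0).astype(float)   # hard threshold activation

# Internal nodes -> layer-1 units: root split x1 > 2, right-child split x2 > 1.
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
b1 = np.array([-2.0, -1.0])

# Leaves -> layer-2 units matching the on/off pattern of split units along
# each root-to-leaf path: A (x1<=2), B (x1>2, x2<=1), C (x1>2, x2>1).
W2 = np.array([[-1.0, 0.0],
               [1.0, -1.0],
               [1.0, 1.0]])
b2 = np.array([0.5, -0.5, -1.5])

leaf_values = np.array([3.0, -1.0, 5.0])   # made-up leaf predictions

def tree_as_network(x):
    h1 = step(W1 @ x + b1)          # which splits are satisfied
    h2 = step(W2 @ h1 + b2)         # one-hot leaf indicator
    return leaf_values @ h2

print(tree_as_network(np.array([3.0, 0.2])))   # x1>2, x2<=1 -> leaf B: -1.0
```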

A Random Forest Guided Tour

no code implementations • 18 Nov 2015 • Gérard Biau, Erwan Scornet

The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method.

General Classification

Online Asynchronous Distributed Regression

1 code implementation • 16 Jul 2014 • Gérard Biau, Ryad Zenine

Distributed computing offers a high degree of flexibility to accommodate modern learning constraints and the ever-increasing size of datasets involved in massive data problems.

Distributed Computing • regression

Consistency of random forests

no code implementations • 12 May 2014 • Erwan Scornet, Gérard Biau, Jean-Philippe Vert

What has greatly contributed to the popularity of forests is the fact that they can be applied to a wide range of prediction problems and have few parameters to tune.

Ensemble Learning • regression

Cellular Tree Classifiers

no code implementations • 20 Jan 2013 • Gérard Biau, Luc Devroye

The cellular tree classifier model addresses a fundamental problem in the design of classifiers for a parallel or distributed computing world: Given a data set, is it sufficient to apply a majority rule for classification, or shall one split the data into two or more parts and send each part to a potentially different computer (or cell) for further processing?

Distributed Computing • General Classification
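
A schematic of the split-or-vote alternative the abstract poses (arbitrary stopping rule and median splits; this illustrates the question, not the paper's classifier). Each cell acts on its own data only, and either stops with a majority vote or ships each half of the split to a child cell, which could run on a different machine.

```python
# Cellular-style classifier: recursive split-or-vote using only local data.
import numpy as np

def cell(X, y, depth=0, min_size=10):
    if len(y) <= min_size:                     # stopping rule: majority vote here
        label = int(np.bincount(y).argmax())
        return lambda x: label
    j = depth % X.shape[1]                     # coordinate to split on
    t = np.median(X[:, j])                     # split point from local data only
    left = X[:, j] <= t
    f_left = cell(X[left], y[left], depth + 1, min_size)
    f_right = cell(X[~left], y[~left], depth + 1, min_size)
    return lambda x: f_left(x) if x[j] <= t else f_right(x)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = cell(X, y)
print(predict(np.array([1.0, 1.0])), predict(np.array([-1.0, -1.0])))
```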
