1 code implementation • 2 Oct 2024 • Pierre Marion, Raphaël Berthier, Gérard Biau, Claire Boyer
To address this gap, we introduce the single-location regression task, where only one token in a sequence determines the output, and its position is a latent random variable, retrievable via a linear projection of the input.
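A minimal sketch of this data-generating setup, assuming Gaussian tokens and a planted direction `w` that makes the relevant token linearly detectable; all dimensions, scales, and names below are illustrative choices, not the paper's specification:

```python
# Illustrative sketch of a single-location regression dataset (all choices
# below are ours, not the paper's specification).
import numpy as np

rng = np.random.default_rng(0)
T, d, n = 10, 8, 1000            # sequence length, token dimension, sample size
w = rng.normal(size=d)           # planted direction flagging the relevant token
v = rng.normal(size=d)           # readout direction producing the output

X = rng.normal(size=(n, T, d))                 # i.i.d. Gaussian tokens
pos = rng.integers(0, T, size=n)               # latent position, one per sequence
X[np.arange(n), pos] += 3.0 * w                # relevant token is linearly detectable
y = X[np.arange(n), pos] @ v                   # output depends on that token only
```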
1 code implementation • 20 Sep 2024 • Nathan Doumèche, Francis Bach, Gérard Biau, Claire Boyer
Building on the formulation of the problem as a kernel regression task, we use Fourier methods to approximate the associated kernel, and propose a tractable estimator that minimizes the physics-informed risk function.
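A hedged sketch of the idea, with the ODE f' + f = 0 standing in for the PDE constraint; the Fourier basis size, collocation grid, and penalty weight are assumptions for illustration, not the paper's estimator:

```python
# Hedged sketch: physics-informed regression in a truncated Fourier basis,
# with the ODE f' + f = 0 standing in for the PDE constraint.
import numpy as np

rng = np.random.default_rng(1)
n, K, lam = 200, 10, 1.0
x = rng.uniform(0, 2 * np.pi, n)
y = np.exp(-x) + 0.05 * rng.normal(size=n)     # noisy solution of f' + f = 0

ks = np.arange(1, K + 1)

def features(t):                                # [1, cos(kt), sin(kt)] basis
    t = np.atleast_1d(t)
    return np.hstack([np.ones((t.size, 1)),
                      np.cos(np.outer(t, ks)), np.sin(np.outer(t, ks))])

def dfeatures(t):                               # derivatives of the basis
    t = np.atleast_1d(t)
    return np.hstack([np.zeros((t.size, 1)),
                      -ks * np.sin(np.outer(t, ks)), ks * np.cos(np.outer(t, ks))])

Phi = features(x)
xc = np.linspace(0, 2 * np.pi, 400)             # collocation points for the physics term
R = dfeatures(xc) + features(xc)                # residual of f' + f at collocation points
A = Phi.T @ Phi / n + lam * R.T @ R / xc.size
c = np.linalg.solve(A, Phi.T @ y / n)           # one linear solve: tractable estimator
```

Both the data-fit and the physics terms are quadratic in the Fourier coefficients, so the regularized risk is minimized by a single linear solve; that is the tractability the entry refers to.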
1 code implementation • 12 Feb 2024 • Nathan Doumèche, Francis Bach, Gérard Biau, Claire Boyer
In this context, we consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency.
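In a generic form (our paraphrase; the differential operator $\mathscr{D}$ and weight $\lambda$ are placeholders, not the paper's exact display), such a physics-informed risk reads

$$\widehat{R}_n(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}\big(f(X_i)-Y_i\big)^2 \;+\; \lambda \int_{\Omega}\big|\mathscr{D}(f)(x)\big|^2\,dx,$$

where $\mathscr{D}(f)=0$ encodes the PDE and the integral quantifies the physical inconsistency of a candidate $f$.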
1 code implementation • 3 Sep 2023 • Pierre Marion, Yu-Han Wu, Michael E. Sander, Gérard Biau
Our results are valid for a finite training time, and also as the training time tends to infinity, provided that the network satisfies a Polyak-Łojasiewicz condition.
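For reference, a loss $L$ satisfies a Polyak-Łojasiewicz condition with constant $\mu > 0$ when

$$\|\nabla L(\theta)\|^2 \;\ge\; 2\mu\,\big(L(\theta)-\inf_{\theta'} L(\theta')\big) \quad \text{for all } \theta,$$

which forces gradient flow to drive the loss to its infimum at a linear rate and makes the infinite-training-time limit well behaved.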
1 code implementation • 14 Jun 2022 • Pierre Marion, Adeline Fermanian, Gérard Biau, Jean-Philippe Vert
Deep ResNets are recognized for achieving state-of-the-art results in complex machine learning tasks.
no code implementations • 8 Jan 2022 • Arthur Stéphanovitch, Ugo Tanielian, Benoît Cadre, Nicolas Klutchnikoff, Gérard Biau
The mathematical forces at work behind Generative Adversarial Networks raise challenging theoretical issues.
1 code implementation • NeurIPS 2021 • Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau
Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of an RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature.
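The low-order signature terms are iterated integrals of the path, and for a piecewise-linear path they have a closed form. A minimal sketch (ours, not the paper's code) up to depth 2:

```python
# Minimal sketch (ours): exact depth-2 signature of a piecewise-linear path.
import numpy as np

def signature_depth2(path):
    """path: (T, d) array of points. Returns level-1 and level-2 terms."""
    dx = np.diff(path, axis=0)                  # segment increments dx_t
    s1 = dx.sum(axis=0)                         # level 1: total increment x_T - x_0
    csum = np.cumsum(dx, axis=0)
    prev = np.vstack([np.zeros(path.shape[1]), csum[:-1]])  # x_t - x_0 before step t
    s2 = prev.T @ dx + dx.T @ dx / 2.0          # level 2: iterated integrals S^{ij}
    return s1, s2

path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])      # an L-shaped path
s1, s2 = signature_depth2(path)
print(s1)            # [1. 1.]
print(s2)            # off-diagonal asymmetry is the Lévy area of the path
```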
1 code implementation • 25 May 2021 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
Interpretability of learning algorithms is crucial for applications involving critical decisions, and variable importance is one of the main interpretation tools.
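As a point of reference, the classical permutation-based variable importance (MDA) for a random forest can be sketched as follows; the model and synthetic data are illustrative:

```python
# Sketch of permutation variable importance (MDA); data and model are
# synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

base = mean_squared_error(y_te, rf.predict(X_te))
rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    Xp = X_te.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # break the link between X_j and y
    print(j, mean_squared_error(y_te, rf.predict(Xp)) - base)   # importance of X_j
```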
1 code implementation • 8 Jun 2020 • Qiming Du, Gérard Biau, François Petit, Raphaël Porcher
We present new insights into causal inference in the context of Heterogeneous Treatment Effects by proposing natural variants of Random Forests to estimate the key conditional distributions.
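A hedged sketch in this spirit, here a simple two-forest (T-learner) baseline; the paper's variants target full conditional distributions, so this conditional-mean version is a deliberate simplification:

```python
# Hedged sketch: two-forest (T-learner) estimate of heterogeneous treatment
# effects on a synthetic randomized experiment.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
W = rng.integers(0, 2, size=n)                  # randomized binary treatment
tau = np.maximum(X[:, 0], 0.0)                  # true heterogeneous effect
y = X[:, 1] + W * tau + 0.1 * rng.normal(size=n)

f1 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[W == 1], y[W == 1])
f0 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[W == 0], y[W == 0])
cate = f1.predict(X) - f0.predict(X)            # estimated tau(x) for each unit
```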
no code implementations • 4 Jun 2020 • Gérard Biau, Maxime Sangnier, Ugo Tanielian
Generative Adversarial Networks (GANs) have produced outstanding results in areas as diverse as image, video, and text generation.
no code implementations • 29 Apr 2020 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
We introduce SIRUS (Stable and Interpretable RUle Set) for regression, a stable rule learning algorithm which takes the form of a short and simple list of rules.
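To make the output format concrete, a SIRUS-style prediction averages the outputs of a few simple if-then rules; the rules, thresholds, and values below are invented for illustration:

```python
# Illustrative SIRUS-style rule list for regression: each rule maps a simple
# condition to two output values (if/else), and the prediction averages the
# activated outputs. All rules, thresholds, and values here are invented.
rules = [
    (lambda x: x["mean_radius"] > 14.0, 32.1, 18.4),
    (lambda x: x["texture"] <= 20.5, 25.7, 29.9),
    (lambda x: x["area"] > 700.0, 33.0, 19.2),
]

def predict(x):
    # average over rules of the branch each rule selects
    return sum(v_if if cond(x) else v_else for cond, v_if, v_else in rules) / len(rules)

print(predict({"mean_radius": 15.2, "texture": 19.0, "area": 650.0}))
```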
no code implementations • 19 Aug 2019 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as "black-boxes" because of the high number and complexity of operations involved in their prediction mechanism.
1 code implementation • 6 Mar 2018 • Gérard Biau, Benoît Cadre, Laurent Rouvière
Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees, by solving an infinite-dimensional optimization problem.
no code implementations • 17 Jul 2017 • Gérard Biau, Benoît Cadre
Gradient boosting is a state-of-the-art prediction technique that sequentially produces a model in the form of linear combinations of simple predictors (typically decision trees) by solving an infinite-dimensional convex optimization problem.
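A minimal sketch of this functional-gradient view for the squared loss, where the negative gradient is simply the residual; learning rate, tree depth, and number of rounds are illustrative:

```python
# Gradient boosting as functional gradient descent with shallow trees,
# for the squared loss.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=500)

F = np.full_like(y, y.mean())                   # constant initial model
nu, trees = 0.1, []
for _ in range(100):
    residual = y - F                             # -gradient of (1/2)(y - F)^2
    t = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(t)
    F += nu * t.predict(X)                       # one step in function space
```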
2 code implementations • 25 Apr 2016 • Gérard Biau, Erwan Scornet, Johannes Welbl
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights.
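A hedged sketch of the translation for one toy tree: the first layer encodes split hyperplanes as +/-1 units, the second matches leaves by their sign patterns, and the output returns the leaf value. Hard thresholds are used here; the paper's construction relaxes them to smooth activations so the network can be trained:

```python
import numpy as np

splits = [(0, 0.5), (1, -0.2), (1, 0.8)]        # internal nodes: (feature, threshold)
# Leaves as sign patterns over the splits on their root-to-leaf path
# (+1 = right, -1 = left, 0 = split not on the path), plus a leaf value.
leaves = [((-1, -1, 0), 1.0), ((-1, +1, 0), 2.0),
          ((+1, 0, -1), 3.0), ((+1, 0, +1), 4.0)]

def tree_net(x):
    h = [1.0 if x[j] > t else -1.0 for j, t in splits]   # layer 1: split units
    for pattern, value in leaves:                         # layer 2: leaf matching
        if all(p == 0 or p == hi for p, hi in zip(pattern, h)):
            return value                                  # layer 3: leaf output

print(tree_net(np.array([0.7, 0.9])))   # right of splits 0 and 2 -> leaf value 4.0
```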
no code implementations • 18 Nov 2015 • Gérard Biau, Erwan Scornet
The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method.
1 code implementation • 16 Jul 2014 • Gérard Biau, Ryad Zenine
Distributed computing offers a high degree of flexibility to accommodate modern learning constraints and the ever-increasing size of the datasets involved in massive-data problems.
no code implementations • 12 May 2014 • Erwan Scornet, Gérard Biau, Jean-Philippe Vert
Much of the popularity of forests comes from the fact that they can be applied to a wide range of prediction problems and have few parameters to tune.
no code implementations • 20 Jan 2013 • Gérard Biau, Luc Devroye
The cellular tree classifier model addresses a fundamental problem in the design of classifiers for a parallel or distributed computing world: Given a data set, is it sufficient to apply a majority rule for classification, or shall one split the data into two or more parts and send each part to a potentially different computer (or cell) for further processing?
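A hedged sketch of one such cellular rule: each cell sees only its own data and either answers by majority vote or splits at a coordinate median and defers the query to a child cell. The stopping rule below is an illustrative assumption, not the paper's:

```python
import numpy as np

def cellular_classify(X, y, query, depth=0):
    if len(y) <= 5 or len(set(y)) == 1 or depth >= 10:
        vals, counts = np.unique(y, return_counts=True)
        return vals[np.argmax(counts)]           # majority rule in this cell
    j = depth % X.shape[1]                       # rotate through coordinates
    t = np.median(X[:, j])                       # split the cell at the median
    mask = X[:, j] <= t if query[j] <= t else X[:, j] > t
    return cellular_classify(X[mask], y[mask], query, depth + 1)

X = np.random.default_rng(0).normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(cellular_classify(X, y, np.array([0.5, 0.5])))     # likely predicts 1
```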