no code implementations • 18 Oct 2023 • Clément Bénard, Jeffrey Näf, Julie Josse
Distributional Random Forest (DRF) is a flexible forest-based method to estimate the full conditional distribution of a multivariate output of interest given input variables.
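DRF's prediction step produces, for each query point, a set of weights over the training responses; any functional of the conditional distribution (quantiles, CDF values) can then be read off the weighted empirical distribution. A minimal numpy sketch of that read-off step, assuming the forest weights have already been computed (the weights below are stubbed with arbitrary values purely for illustration; in DRF they come from leaf co-membership counts across trees):

```python
import numpy as np

def weighted_quantile(y, w, q):
    """q-quantile of the weighted empirical distribution sum_i w_i * delta_{y_i}."""
    order = np.argsort(y)
    y_sorted, w_sorted = y[order], w[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)
    idx = np.searchsorted(cdf, q)
    return y_sorted[min(idx, len(y_sorted) - 1)]

# Toy training responses and stub forest weights for one query point x.
y_train = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
weights = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

median = weighted_quantile(y_train, weights, 0.5)
upper = weighted_quantile(y_train, weights, 0.95)
```

For a multivariate output, the same weights serve every coordinate, which is what lets DRF target the full joint conditional distribution rather than a single summary.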
1 code implementation • 7 Aug 2023 • Clément Bénard, Julie Josse
In this article, we develop a new variable importance algorithm for causal forests to quantify the impact of each input variable on the heterogeneity of treatment effects.
1 code implementation • NeurIPS 2023 • Clément Bénard, Brian Staber, Sébastien da Veiga
Stein thinning is a promising algorithm, proposed by Riabiz et al. (2022), for post-processing the output of Markov chain Monte Carlo (MCMC) samplers.
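Stein thinning greedily selects a small subset of MCMC samples so as to minimise a kernel Stein discrepancy, which only requires the score function of the target. A self-contained numpy sketch of the standard greedy scheme with an IMQ base kernel (kernel parameters and the Gaussian toy target below are illustrative choices, not the paper's setup):

```python
import numpy as np

def imq_stein_kernel(X, scores, c2=1.0):
    """Stein kernel matrix for the IMQ base kernel k(x,y) = (c2 + |x-y|^2)^(-1/2)."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]        # diff[i, j] = x_i - x_j
    r2 = np.sum(diff ** 2, axis=-1)             # squared pairwise distances
    u = c2 + r2
    # k_p(x,y) = div_x div_y k + s(x).grad_y k + s(y).grad_x k + (s(x).s(y)) k
    term_trace = d * u ** (-1.5) - 3.0 * r2 * u ** (-2.5)
    cross = np.einsum('ijk,ijk->ij', diff, scores[:, None, :] - scores[None, :, :])
    term_cross = u ** (-1.5) * cross
    term_score = (scores @ scores.T) * u ** (-0.5)
    return term_trace + term_cross + term_score

def stein_thinning(X, scores, m):
    """Greedily pick m points minimising the KSD of the selected empirical measure."""
    K = imq_stein_kernel(X, scores)
    idx = []
    running = np.zeros(len(X))                  # sum of k_p(., x_i) over selected i
    for _ in range(m):
        j = int(np.argmin(np.diag(K) + 2.0 * running))
        idx.append(j)
        running += K[j]
    return idx

# Toy usage: thin draws from a standard Gaussian, whose score is s(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
idx = stein_thinning(X, -X, 5)
```

Each greedy step costs one pass over the precomputed Stein kernel matrix, so the whole procedure is quadratic in the number of candidate samples.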
1 code implementation • 25 May 2021 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
Interpretability of learning algorithms is crucial for applications involving critical decisions, and variable importance is one of the main interpretation tools.
no code implementations • 26 Feb 2021 • Clément Bénard, Sébastien da Veiga, Erwan Scornet
Variable importance measures are the main tools to analyze the black-box mechanisms of random forests.
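One of the two classical measures in question, Mean Decrease Accuracy (MDA), scores a variable by how much predictive error grows when that variable's column is randomly permuted. A model-agnostic numpy sketch of this permutation scheme (the least-squares model stands in for a random forest, and the toy data are made up for illustration):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, rng=None):
    """MDA-style importance: mean increase in MSE when column j is shuffled."""
    rng = np.random.default_rng(rng)
    base_mse = np.mean((y - predict(X)) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])               # break the link between x_j and y
            importances[j] += np.mean((y - predict(Xp)) ** 2) - base_mse
    return importances / n_repeats

# Toy setup: y depends strongly on x0 and not at all on x1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
imp = permutation_importance(lambda Z: Z @ beta, X, y, rng=1)
```

Here `imp[0]` dwarfs `imp[1]`, as expected; with dependent inputs, however, this kind of permutation score is exactly where the black-box behaviour becomes subtle to interpret.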
no code implementations • 29 Apr 2020 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
We introduce SIRUS (Stable and Interpretable RUle Set) for regression, a stable rule learning algorithm which takes the form of a short and simple list of rules.
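The output format of such a rule set can be sketched in a few lines of numpy: each rule is a single threshold test with two output values, and a prediction aggregates the rule outputs. The rules, thresholds, and outputs below are invented for illustration, and a plain average is used where the fitted method may weight rules:

```python
import numpy as np

# Each rule: if x[feature] < threshold then out_true else out_false.
# These two rules and their output values are made up for illustration.
rules = [
    (0, 3.5, 10.0, 20.0),   # if x0 < 3.5 then 10.0 else 20.0
    (1, 1.2,  8.0, 18.0),   # if x1 < 1.2 then  8.0 else 18.0
]

def predict(x, rules):
    """Average the outputs of all activated rule branches."""
    outs = [t if x[f] < thr else o for (f, thr, t, o) in rules]
    return float(np.mean(outs))

x = np.array([2.0, 4.0])
pred = predict(x, rules)    # rule 1 fires its "true" branch, rule 2 its "false" branch
```

The appeal of this form is that every prediction can be traced back to a handful of human-readable threshold tests, which is the interpretability goal the abstract refers to.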
no code implementations • 19 Aug 2019 • Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet
State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as "black-boxes" because of the high number and complexity of operations involved in their prediction mechanism.