Search Results for author: Julie Josse

Found 23 papers, 18 papers with code

MMD-based Variable Importance for Distributional Random Forest

no code implementations 18 Oct 2023 Clément Bénard, Jeffrey Näf, Julie Josse

Distributional Random Forest (DRF) is a flexible forest-based method to estimate the full conditional distribution of a multivariate output of interest given input variables.

Positivity-free Policy Learning with Observational Data

1 code implementation 10 Oct 2023 Pan Zhao, Antoine Chambaz, Julie Josse, Shu Yang

Policy learning utilizing observational data is pivotal across various domains, with the objective of learning the optimal treatment assignment policy while adhering to specific constraints such as fairness, budget, and simplicity.

Fairness

Variable importance for causal forests: breaking down the heterogeneity of treatment effects

1 code implementation 7 Aug 2023 Clément Bénard, Julie Josse

In this article, we develop a new variable importance algorithm for causal forests to quantify the impact of each input on the heterogeneity of treatment effects.

Conformal Prediction with Missing Values

1 code implementation 5 Jun 2023 Margaux Zaffran, Aymeric Dieuleveut, Julie Josse, Yaniv Romano

This motivates our novel generalized conformalized quantile regression framework, missing data augmentation, which yields prediction intervals that are valid conditionally on the patterns of missing values, despite their exponential number.

Conformal Prediction, Data Augmentation, +5
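
As a rough illustration of the building blocks above, here is a minimal split conformalized quantile regression (CQR) sketch in Python where features are mean-imputed and the missingness mask is appended as extra columns; it assumes scikit-learn's GradientBoostingRegressor for the quantile models and does not reproduce the paper's missing-data-augmentation step.

```python
# Sketch: split CQR on mean-imputed features with the missingness mask appended.
# Illustrative only; the paper's missing-data-augmentation step is NOT reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def featurize(X, col_means):
    """Mean-impute and append the missingness mask as extra columns."""
    mask = np.isnan(X)
    return np.hstack([np.where(mask, col_means, X), mask.astype(float)])

def split_cqr(X_tr, y_tr, X_cal, y_cal, X_te, alpha=0.1):
    col_means = np.nanmean(X_tr, axis=0)
    F = lambda X: featurize(X, col_means)
    lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(F(X_tr), y_tr)
    hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(F(X_tr), y_tr)
    # CQR conformity scores on the calibration set and their finite-sample quantile
    scores = np.maximum(lo.predict(F(X_cal)) - y_cal, y_cal - hi.predict(F(X_cal)))
    n = len(y_cal)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return lo.predict(F(X_te)) - q, hi.predict(F(X_te)) + q  # lower and upper bounds
```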

Efficient and robust transfer learning of optimal individualized treatment regimes with right-censored survival data

1 code implementation 13 Jan 2023 Pan Zhao, Julie Josse, Shu Yang

We present an efficient and robust transfer learning framework for estimating the optimal ITR with right-censored survival data that generalizes well to the target population.

Counterfactual, Transfer Learning

Benchmarking missing-values approaches for predictive models on health databases

1 code implementation 17 Feb 2022 Alexandre Perez-Lebel, Gaël Varoquaux, Marine Le Morvan, Julie Josse, Jean-Baptiste Poline

Using gradient-boosted trees, we compare native support for missing values with simple and state-of-the-art imputation prior to learning.

Attribute, Benchmarking, +1
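
A minimal sketch of this kind of comparison, assuming a scikit-learn workflow; the synthetic data, missingness mechanism, and model choices below are illustrative placeholders, not the paper's health-database benchmark.

```python
# Sketch: gradient-boosted trees with native NaN support vs. imputation before learning.
# Synthetic data and models are illustrative; this is not the paper's benchmark.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=1000, n_features=10, noise=1.0, random_state=0)
X[rng.random(X.shape) < 0.2] = np.nan  # inject 20% missing values completely at random

models = {
    # HistGradientBoostingRegressor routes missing values at each split natively
    "native NaN support": HistGradientBoostingRegressor(random_state=0),
    "mean imputation": make_pipeline(
        SimpleImputer(strategy="mean"), HistGradientBoostingRegressor(random_state=0)
    ),
    "conditional (iterative) imputation": make_pipeline(
        IterativeImputer(random_state=0), HistGradientBoostingRegressor(random_state=0)
    ),
}
for name, model in models.items():
    print(f"{name}: R^2 = {cross_val_score(model, X, y, cv=5).mean():.3f}")
```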

Adaptive Conformal Predictions for Time Series

2 code implementations 15 Feb 2022 Margaux Zaffran, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, Julie Josse

While recent works have tackled this issue, we argue that Adaptive Conformal Inference (ACI, Gibbs and Candès, 2021), developed for distribution-shift time series, is a good procedure for time series with general dependency.

Conformal Prediction, Decision Making, +4
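
The ACI update rule is simple enough to sketch; below is a minimal toy illustration in Python, where the one-step-ahead forecaster, the data, and the step size gamma are placeholders rather than the paper's experimental setup.

```python
# Sketch of the ACI update of Gibbs and Candès (2021) on top of split conformal
# intervals built from past absolute residuals (toy forecaster and data).
import numpy as np

rng = np.random.default_rng(0)
T, alpha, gamma = 500, 0.1, 0.01          # horizon, target miscoverage, ACI step size
y = np.cumsum(rng.normal(size=T))         # toy time series
y_hat = np.r_[0.0, y[:-1]]                # naive one-step-ahead forecast
residuals, coverage = [], []
alpha_t = alpha

for t in range(T):
    if residuals:
        # Conformal quantile of past residuals at the current (adapted) level
        q = np.quantile(residuals, min(max(1 - alpha_t, 0.0), 1.0))
        err_t = 0.0 if abs(y[t] - y_hat[t]) <= q else 1.0
        coverage.append(1.0 - err_t)
        # ACI update: widen after a miss (err_t = 1), tighten after a cover (err_t = 0)
        alpha_t += gamma * (alpha - err_t)
    residuals.append(abs(y[t] - y_hat[t]))

print(f"empirical coverage: {np.mean(coverage):.3f} (target {1 - alpha:.2f})")
```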

Model-based Clustering with Missing Not At Random Data

1 code implementation 20 Dec 2021 Aude Sportisse, Matthieu Marbac, Fabien Laporte, Gilles Celeux, Claire Boyer, Julie Josse, Christophe Biernacki

In this paper, we propose model-based clustering algorithms designed to handle very general types of missing data, including MNAR data.

Clustering, Imputation

What’s a good imputation to predict with missing values?

no code implementations NeurIPS 2021 Marine Le Morvan, Julie Josse, Erwan Scornet, Gael Varoquaux

In fact, we show that on perfectly imputed data the best regression function will generally be discontinuous, which makes it hard to learn.

Imputation, Regression

What's a good imputation to predict with missing values?

1 code implementation 1 Jun 2021 Marine Le Morvan, Julie Josse, Erwan Scornet, Gaël Varoquaux

In fact, we show that on perfectly imputed data the best regression function will generally be discontinuous, which makes it hard to learn.

Imputation, Regression

NeuMiss networks: differentiable programming for supervised learning with missing values

no code implementations 3 Jul 2020 Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gaël Varoquaux

We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.

Imputation

MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent Variable Models

1 code implementation 25 Feb 2020 Imke Mayer, Julie Josse, Félix Raimundo, Jean-Philippe Vert

Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications.

Causal Inference, Imputation

Missing Data Imputation using Optimal Transport

1 code implementation ICML 2020 Boris Muzellec, Julie Josse, Claire Boyer, Marco Cuturi

Missing data is a crucial issue when applying machine learning algorithms to real-world datasets.

Imputation

Linear predictor on linearly-generated data with missing values: non consistency and solutions

1 code implementation 3 Feb 2020 Marine Le Morvan, Nicolas Prost, Julie Josse, Erwan Scornet, Gaël Varoquaux

In the particular Gaussian case, it can be written as a linear function of multiway interactions between the observed data and the various missing-value indicators.

Generalization Bounds
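
As a rough illustration of that observation, one can fit a linear model on the zero-imputed data, the missingness indicators, and their products; the sketch below expands only pairwise products (the exact Gaussian-case predictor involves higher-order interactions) and assumes scikit-learn's LinearRegression.

```python
# Sketch: linear regression on zero-imputed data, missingness indicators, and their
# pairwise products (the exact Gaussian-case predictor uses higher-order interactions).
import numpy as np
from sklearn.linear_model import LinearRegression

def expand(X):
    mask = np.isnan(X).astype(float)
    X0 = np.nan_to_num(X)                          # zero-impute missing entries
    pairwise = np.einsum("ij,ik->ijk", X0, mask)   # products x_j * 1{x_k is missing}
    return np.hstack([X0, mask, pairwise.reshape(len(X), -1)])

# usage (hypothetical data): LinearRegression().fit(expand(X_train), y_train)
```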

Doubly robust treatment effect estimation with missing attributes

2 code implementations 23 Oct 2019 Imke Mayer, Erik Sverdrup, Tobias Gauss, Jean-Denis Moyer, Stefan Wager, Julie Josse

We find, however, that doubly robust modifications of standard methods for average treatment effect estimation with missing data repeatedly perform better than their non-doubly robust baselines; for example, doubly robust generalized propensity score methods beat inverse-weighting with the generalized propensity score.

Methodology (MSC: 93C41, 62G35, 62F35, 62P10)
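
For reference, a standard doubly robust (AIPW) average treatment effect estimate given already-fitted nuisance models looks roughly as follows; the paper's specific handling of missing attributes via generalized propensity scores is not shown, and the function and argument names are illustrative.

```python
# Sketch of a doubly robust (AIPW) ATE estimate from fitted nuisance models.
# The treatment of missing attributes via generalized propensity scores is not shown.
import numpy as np

def aipw_ate(y, w, e_hat, mu0_hat, mu1_hat):
    """y: outcomes, w: binary treatment indicator, e_hat: fitted propensity P(W=1|X),
    mu0_hat / mu1_hat: fitted outcome regressions E[Y|X, W=0] and E[Y|X, W=1]."""
    aug1 = mu1_hat + w * (y - mu1_hat) / e_hat
    aug0 = mu0_hat + (1 - w) * (y - mu0_hat) / (1 - e_hat)
    scores = aug1 - aug0                 # per-observation influence-function values
    return scores.mean(), scores.std(ddof=1) / np.sqrt(len(y))  # estimate, std. error
```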

Adaptive Bayesian SLOPE -- High-dimensional Model Selection with Missing Values

3 code implementations 14 Sep 2019 Wei Jiang, Malgorzata Bogdan, Julie Josse, Blazej Miasojedow, Veronika Rockova, Traumabase group

We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates.

Methodology, Applications, Computation

Estimation and imputation in Probabilistic Principal Component Analysis with Missing Not At Random data

1 code implementation NeurIPS 2020 Aude Sportisse, Claire Boyer, Julie Josse

Considering a data matrix generated from a probabilistic principal component analysis (PPCA) model containing several MNAR variables, not necessarily under the same self-masked missing mechanism, we propose estimators for the means, variances and covariances of the variables and study their consistency.

Statistics Theory

On the consistency of supervised learning with missing values

3 code implementations 19 Feb 2019 Julie Josse, Jacob M. Chen, Nicolas Prost, Erwan Scornet, Gaël Varoquaux

A striking result is that the widely used method of imputing with a constant, such as the mean, prior to learning is consistent when missing values are not informative.

Attribute, Imputation
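
A minimal impute-then-regress pipeline of the kind discussed above, assuming scikit-learn; the choice of RandomForestRegressor and the add_indicator option are illustrative, not the paper's setup.

```python
# Sketch: a constant (mean) impute-then-regress pipeline with a flexible learner.
# The learner and the add_indicator option are illustrative, not the paper's setup.
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

impute_then_regress = make_pipeline(
    SimpleImputer(strategy="mean", add_indicator=True),  # also appends missingness indicators
    RandomForestRegressor(n_estimators=200, random_state=0),
)
# usage (hypothetical data): impute_then_regress.fit(X_train, y_train).predict(X_test)
```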

Imputation and low-rank estimation with Missing Not At Random data

1 code implementation 29 Dec 2018 Aude Sportisse, Claire Boyer, Julie Josse

Our second contribution is a computationally efficient surrogate estimation that implicitly takes into account the joint distribution of the data and the missing mechanism: the data matrix is concatenated with the mask coding for the missing values, and a low-rank exponential-family structure is assumed on this augmented matrix in order to encode links between variables and missing mechanisms.

Imputation, Matrix Completion
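
A rough sketch of that idea: concatenate the data matrix with its missingness mask and fit a low-rank completion, here with iterative soft-thresholded SVD under a squared loss rather than the exponential-family loss assumed in the paper; the function name and regularization parameter are illustrative.

```python
# Sketch: concatenate the data matrix with its mask and fit a low-rank completion by
# iterative soft-thresholded SVD (squared loss; the paper uses an exponential-family model).
import numpy as np

def lowrank_impute_with_mask(X, lam=1.0, n_iter=100):
    mask = np.isnan(X)
    Z = np.hstack([np.where(mask, 0.0, X), mask.astype(float)])  # [data | mask]
    observed = np.hstack([~mask, np.ones_like(mask)])            # mask columns fully observed
    L = Z.copy()
    for _ in range(n_iter):
        # Fill unobserved cells with the current low-rank fit, then shrink singular values
        U, s, Vt = np.linalg.svd(np.where(observed, Z, L), full_matrices=False)
        L = (U * np.maximum(s - lam, 0.0)) @ Vt
    d = X.shape[1]
    return np.where(mask, L[:, :d], X)  # data matrix with missing entries imputed
```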

Low-rank Interaction with Sparse Additive Effects Model for Large Data Frames

no code implementations NeurIPS 2018 Geneviève Robin, Hoi-To Wai, Julie Josse, Olga Klopp, Éric Moulines

In this paper, we introduce a low-rank interaction and sparse additive effects (LORIS) model which combines matrix regression on a dictionary and low-rank design, to estimate main effects and interactions simultaneously.

Clustering, Imputation

Stochastic Approximation EM for Logistic Regression with Missing Values

1 code implementation 11 May 2018 Wei Jiang, Julie Josse, Marc Lavielle, Traumabase group

We propose a complete approach, including the estimation of parameters and their variance, derivation of confidence intervals, a model selection procedure, and a method for prediction on test sets with missing values.

Methodology

Bootstrap-Based Regularization for Low-Rank Matrix Estimation

no code implementations 30 Oct 2014 Julie Josse, Stefan Wager

In the simplest case, with an isotropic noise model, our method is equivalent to a classical singular value shrinkage estimator.
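
For context, a classical singular value shrinkage estimator can be sketched in a few lines; soft-thresholding is used here purely for illustration and is not necessarily the shrinkage function derived in the paper.

```python
# Sketch of a classical singular value shrinkage estimator; soft-thresholding is used
# purely for illustration and is not necessarily the paper's shrinkage function.
import numpy as np

def shrink_singular_values(Y, tau):
    """Shrink the singular values of Y by tau and reconstruct a low-rank estimate."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```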
