1 code implementation • 7 Jun 2023 • Toon Vanderschueren, Alicia Curth, Wouter Verbeke, Mihaela van der Schaar
Machine learning (ML) holds great potential for accurately forecasting treatment outcomes over time, which could ultimately enable the adoption of more individualized treatment strategies in many practical applications.
2 code implementations • 23 Feb 2023 • Alicia Curth, Mihaela van der Schaar
We study the problem of inferring heterogeneous treatment effects (HTEs) from time-to-event data in the presence of competing events.
1 code implementation • 6 Feb 2023 • Alicia Curth, Mihaela van der Schaar
Personalized treatment effect estimates are often of interest in high-stakes applications; thus, before deploying a model that estimates such effects in practice, one needs to be sure that the best candidate from the ever-growing machine learning toolbox for this task was chosen.
no code implementations • 11 Aug 2022 • Alicia Curth, Alihan Hüyük, Mihaela van der Schaar
We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial.
no code implementations • 16 Jun 2022 • Jonathan Crabbé, Alicia Curth, Ioana Bica, Mihaela van der Schaar
We construct a benchmarking environment to empirically investigate the ability of personalized treatment effect models to identify predictive covariates, i.e. covariates that determine differential responses to treatment. This allows us to evaluate treatment effect estimators along a new and important dimension that has been overlooked in previous work.
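As a toy sketch of what such a check could look like (a fully synthetic example under illustrative assumptions: the data-generating process, the T-learner-style estimator, and the use of feature importances as the attribution method are all placeholder choices, not the benchmark from the paper):

```python
# Toy illustration of testing whether a treatment effect model picks up the
# truly predictive covariates; the simulated data and the attribution method
# are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.normal(size=(n, d))
A = rng.binomial(1, 0.5, size=n)                       # randomized treatment
tau = 2.0 * X[:, 0]                                    # only X0 is predictive
Y = X[:, 1] + A * tau + rng.normal(scale=0.1, size=n)  # X1 is prognostic only

# Naive T-learner: separate outcome regressions per arm, CATE = difference.
mu1 = RandomForestRegressor().fit(X[A == 1], Y[A == 1])
mu0 = RandomForestRegressor().fit(X[A == 0], Y[A == 0])
cate = mu1.predict(X) - mu0.predict(X)

# Check which covariates a surrogate model of the estimated CATE relies on.
surrogate = RandomForestRegressor().fit(X, cate)
print(surrogate.feature_importances_.round(2))  # should load mostly on X0
```

In this toy setup only the first covariate drives effect heterogeneity, so a well-behaved estimator's attributions should concentrate on it.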
1 code implementation • 15 Jun 2022 • Daniel Jarrett, Bogdan Cebere, Tennison Liu, Alicia Curth, Mihaela van der Schaar
Consider the problem of imputing missing values in a dataset.
2 code implementations • ICLR 2022 • Alex J. Chan, Alicia Curth, Mihaela van der Schaar
Human decision-making is well known to be imperfect, and the ability to analyse such processes individually is crucial when attempting to aid or improve a decision-maker's ability to perform a task, e.g. to alert them to potential biases or oversights on their part.
1 code implementation • 25 Feb 2022 • Tobias Hatt, Jeroen Berrevoets, Alicia Curth, Stefan Feuerriegel, Mihaela van der Schaar
While observational data is confounded, randomized data is unconfounded; however, its sample size is usually too small to learn heterogeneous treatment effects.
no code implementations • 7 Dec 2021 • Jeroen Berrevoets, Alicia Curth, Ioana Bica, Eoin McKinney, Mihaela van der Schaar
Choosing the best treatment plan for each individual patient requires accurate forecasts of their outcome trajectories as a function of the treatment over time.
1 code implementation • NeurIPS 2021 • Zhaozhi Qian, Alicia Curth, Mihaela van der Schaar
Most existing methods for conditional average treatment effect estimation are designed to estimate the effect of a single cause: only one variable can be intervened on at a time.
1 code implementation • NeurIPS 2021 • Alicia Curth, Changhee Lee, Mihaela van der Schaar
We study the problem of inferring heterogeneous treatment effects from time-to-event data.
no code implementations • 28 Jul 2021 • Alicia Curth, Mihaela van der Schaar
The machine learning toolbox for estimation of heterogeneous treatment effects from observational data is expanding rapidly, yet many of its algorithms have been evaluated only on a very limited set of semi-synthetic benchmark datasets.
1 code implementation • NeurIPS 2021 • Alicia Curth, Mihaela van der Schaar
We investigate how to exploit structural similarities of an individual's potential outcomes (POs) under different treatments to obtain better estimates of conditional average treatment effects in finite samples.
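As a rough illustration of one way such structural similarity can be exploited (a generic multi-task sketch in which both potential-outcome regressions share a learned representation; the architecture, layer sizes, and names are illustrative assumptions rather than the estimators studied in the paper):

```python
# Generic multi-task sketch: a shared representation with one output head per
# treatment arm, so the two potential-outcome regressions share parameters.
# Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class SharedTrunkPOModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Representation shared by both potential-outcome functions.
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # Treatment-specific heads for mu_0(x) and mu_1(x).
        self.head0 = nn.Linear(hidden, 1)
        self.head1 = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.trunk(x)
        mu0, mu1 = self.head0(z), self.head1(z)
        return mu0, mu1, mu1 - mu0  # CATE estimate is the difference of heads
```

Sharing the trunk lets observations from both treatment arms inform a common representation, while each head still captures treatment-specific structure.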
2 code implementations • 26 Jan 2021 • Alicia Curth, Mihaela van der Schaar
The need to evaluate treatment effectiveness is ubiquitous in most of empirical science, and interest in flexibly investigating effect heterogeneity is growing rapidly.
1 code implementation • 14 Aug 2020 • Alicia Curth, Ahmed M. Alaa, Mihaela van der Schaar
Within this framework, we propose two general learning algorithms that build on the idea of nonparametric plug-in bias removal via influence functions (IFs): the 'IF-learner', which uses pseudo-outcomes motivated by uncentered IFs for regression in large samples and outputs entire target functions without confidence bands, and the 'Group-IF-learner', which outputs only approximations to a function but can give confidence estimates if sufficient information on coarsening mechanisms is available.
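As a minimal sketch of the general pseudo-outcome regression recipe (the doubly-robust/AIPW pseudo-outcome below is one standard example of a pseudo-outcome motivated by an uncentered influence function; the choice of nuisance models, the propensity clipping, and the lack of cross-fitting are simplifying assumptions, not the paper's exact IF-learner construction):

```python
# Minimal sketch of pseudo-outcome regression for treatment effect estimation.
# The doubly-robust (AIPW) pseudo-outcome is an illustrative stand-in for a
# pseudo-outcome motivated by an uncentered influence function.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def dr_pseudo_outcome_learner(X, A, Y):
    """X: covariates, A: binary treatment indicator, Y: observed outcome."""
    # Stage 1: nuisance estimates (propensity score and per-arm outcome models).
    prop = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    prop = np.clip(prop, 1e-3, 1 - 1e-3)
    mu1 = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0]).predict(X)

    # Stage 2: doubly-robust pseudo-outcome (plug-in difference plus an
    # inverse-propensity-weighted residual correction).
    pseudo = (mu1 - mu0
              + A * (Y - mu1) / prop
              - (1 - A) * (Y - mu0) / (1 - prop))

    # Stage 3: regress the pseudo-outcome on covariates to get an effect model.
    return GradientBoostingRegressor().fit(X, pseudo)
```

In practice the nuisance models would typically be fit with sample splitting (cross-fitting) so that the second-stage regression is not evaluated on the same data used to estimate them.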