Search Results for author: Michael Oberst

Found 10 papers, 8 papers with code

Falsification of Internal and External Validity in Observational Studies via Conditional Moment Restrictions

no code implementations • 30 Jan 2023 • Zeshan Hussain, Ming-Chieh Shih, Michael Oberst, Ilker Demirel, David Sontag

Our approach is interpretable, allowing a practitioner to visualize which subgroups in the population lead to falsification of an observational study.

Falsification before Extrapolation in Causal Effect Estimation

1 code implementation • 27 Sep 2022 • Zeshan Hussain, Michael Oberst, Ming-Chieh Shih, David Sontag

Under the assumption that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, we provide guarantees on the coverage probability of the intervals output by our algorithm.

Selection bias
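
The simplest instance of the falsification idea this entry describes is a two-sample comparison: an observational estimator is rejected when its estimate differs from a randomized (RCT) benchmark by more than sampling noise can explain. The sketch below is only that simplest instance under asymptotic normality; the paper's actual procedure handles multiple estimators and extrapolated effects, and the function name here is illustrative.

```python
from statistics import NormalDist

def falsification_z_test(obs_est, obs_se, rct_est, rct_se, alpha=0.05):
    """Falsify the observational estimator if its estimate differs from the
    RCT benchmark by more than sampling noise can explain (two-sided z-test)."""
    z = (obs_est - rct_est) / (obs_se**2 + rct_se**2) ** 0.5
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    return abs(z) > crit  # True => falsified

falsification_z_test(1.0, 0.1, 1.05, 0.1)   # close to the RCT: not falsified
falsification_z_test(1.0, 0.05, 2.0, 0.05)  # far from the RCT: falsified
```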

Evaluating Robustness to Dataset Shift via Parametric Robustness Sets

1 code implementation • 31 May 2022 • Nikolaj Thams, Michael Oberst, David Sontag

We give a method for proactively identifying small, plausible shifts in distribution which lead to large differences in model performance.
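
A toy version of the evaluation-under-shift idea: fit a model on a source distribution, then scan a parameterized family of small distribution shifts for the one that most degrades test performance. This is only an illustration with a hypothetical setup (a misspecified linear fit to a sine, mean shifts of a Gaussian covariate, brute-force grid search); the paper itself works with parametric shifts in a causal model and a more efficient search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a (misspecified) linear model on the training distribution, x ~ N(0, 1).
x = rng.normal(0.0, 1.0, size=2000)
w, *_ = np.linalg.lstsq(np.c_[x, np.ones_like(x)], np.sin(x), rcond=None)

def shifted_mse(delta, n=4000):
    """Test MSE when the covariate mean is shifted to delta (variance fixed)."""
    xs = rng.normal(delta, 1.0, size=n)
    pred = np.c_[xs, np.ones(n)] @ w
    return float(np.mean((np.sin(xs) - pred) ** 2))

# Scan a small set of plausible shifts for the worst-case degradation.
deltas = np.linspace(-1.0, 1.0, 21)
worst = max(deltas, key=shifted_mse)
```

Even a one-standard-deviation mean shift makes this model's error climb well above its in-distribution value, which is the kind of "small shift, large performance difference" the abstract refers to.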

Regularizing towards Causal Invariance: Linear Models with Proxies

1 code implementation • 3 Mar 2021 • Michael Oberst, Nikolaj Thams, Jonas Peters, David Sontag

In the case of two proxy variables, we propose a modified estimator that is prediction optimal under interventions up to a known strength.
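This line of work builds on anchor regression (Rothenhäusler et al.), which regularizes least squares toward invariance under interventions of bounded strength. The sketch below implements plain anchor regression, not the paper's modified two-proxy estimator: OLS on data transformed so that variation explained by the anchor variable is re-weighted by a tuning parameter gamma (gamma = 1 recovers ordinary least squares).

```python
import numpy as np

def anchor_regression(X, y, A, gamma):
    """Anchor regression: OLS on data transformed so that the component of
    (X, y) explained by the anchors A is scaled by sqrt(gamma)."""
    n = len(y)
    A1 = np.c_[A, np.ones(n)]          # anchors plus intercept column
    P = A1 @ np.linalg.pinv(A1)        # projection onto the anchor space
    W = np.eye(n) + (np.sqrt(gamma) - 1.0) * P
    Xt, yt = W @ X, W @ y
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]
```

Large gamma protects against strong interventions on the anchor at the cost of in-distribution fit; gamma = 1 places no extra weight on anchor-explained variation.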

Treatment Policy Learning in Multiobjective Settings with Fully Observed Outcomes

1 code implementation • 1 Jun 2020 • Soorajnath Boominathan, Michael Oberst, Helen Zhou, Sanjat Kanjilal, David Sontag

In several medical decision-making problems, such as antibiotic prescription, laboratory testing can provide precise indications for how a patient will respond to different treatment options.

Decision Making

Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models

1 code implementation • 14 May 2019 • Michael Oberst, David Sontag

We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement-learned (RL) policy is likely to have produced a substantially different outcome than the observed policy.

Management · Off-policy evaluation
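
The core of this procedure for categorical actions is the Gumbel-Max trick run in reverse: given the observed action, sample the posterior over the exogenous Gumbel noise (via truncated Gumbels), then replay that noise under the evaluation policy's logits to get a counterfactual outcome. The sketch below shows that sampling step for a single decision; function names are illustrative, and the paper applies this within full RL episodes.

```python
import numpy as np

def posterior_gumbels(logits, observed, rng):
    """Sample exogenous Gumbel(0, 1) noise u consistent with having observed
    `observed` == argmax(logits + u), via top-down truncated Gumbel sampling."""
    logits = np.asarray(logits, dtype=float)
    # The max of Gumbel(logit_i) variables is Gumbel(logsumexp(logits)).
    max_val = np.log(np.exp(logits).sum()) - np.log(-np.log(rng.random()))
    # Sample each coordinate, then truncate the non-argmax ones below max_val.
    raw = logits - np.log(-np.log(rng.random(logits.shape)))
    truncated = -np.log(np.exp(-raw) + np.exp(-max_val))
    g = np.where(np.arange(len(logits)) == observed, max_val, truncated)
    return g - logits  # posterior sample of the Gumbel(0, 1) noise

def counterfactual_outcome(obs_logits, cf_logits, observed, rng):
    """What would the alternative policy's logits have produced in the same
    situation, holding the inferred noise fixed?"""
    u = posterior_gumbels(obs_logits, observed, rng)
    return int(np.argmax(np.asarray(cf_logits) + u))
```

A key consistency property: replaying the posterior noise under the *same* logits always reproduces the observed action, so counterfactuals only diverge where the evaluation policy genuinely disagrees.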
