1 code implementation • 8 Nov 2023 • Michaela Hardt, William R. Orchard, Patrick Blöbaum, Shiva Kasiviswanathan, Elke Kirschbaum
Although the machine learning and systems research communities have proposed various techniques to tackle this problem, there is currently a lack of standardized datasets for quantitative benchmarking.
no code implementations • 19 Jul 2023 • Michael Oesterle, Patrick Blöbaum, Atalanti A. Mastakouri, Elke Kirschbaum
Which set of features was responsible for a certain output of a machine learning model?
no code implementations • 16 May 2023 • Elias Eulig, Atalanti A. Mastakouri, Patrick Blöbaum, Michaela Hardt, Dominik Janzing
By comparing the number of inconsistencies with those on the surrogate baseline, we derive an interpretable metric that captures whether the DAG fits significantly better than random.
2 code implementations • 2 Feb 2023 • Patrick Chao, Patrick Blöbaum, Shiva Prasad Kasiviswanathan
We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting where only observational data and the causal graph are available.
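The counterfactual part of such queries is classically answered with Pearl's three-step procedure (abduction, action, prediction). A minimal sketch on a toy linear SCM, purely to illustrate the three steps (the SCM and its coefficients are assumptions for illustration, not the paper's method):

```python
# Toy SCM: X = U_X, Y = slope * X + U_Y. Given an observed (x, y) pair,
# answer "what would Y have been had X been x_new?" via the classic
# abduction-action-prediction steps. Illustrative only; the paper's
# approach uses diffusion models rather than a hand-specified SCM.

def counterfactual_y(x_obs, y_obs, x_new, slope=2.0):
    u_y = y_obs - slope * x_obs   # abduction: recover the noise term from the observation
    return slope * x_new + u_y    # action + prediction: set X to x_new, re-evaluate Y

print(counterfactual_y(x_obs=1.0, y_obs=3.0, x_new=2.0))  # 5.0
```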
no code implementations • 12 Jan 2023 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum
In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems.
1 code implementation • 10 Jan 2023 • Muhammad Faaiz Taufiq, Patrick Blöbaum, Lenon Minorics
Shapley values are model-agnostic methods for explaining model predictions.
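The model-agnostic recipe can be sketched exactly for a tiny model by enumerating all feature coalitions; "dropped" features are replaced by a fixed baseline here (the model, point, and baseline below are illustrative assumptions, not the paper's setup):

```python
import itertools
from math import factorial

# Exact Shapley values for a toy linear model f(x) = 1*x0 + 2*x1 + 3*x2,
# with absent features imputed by a fixed baseline. All choices here are
# illustrative assumptions.

coefs = [1.0, 2.0, 3.0]
x = [1.0, 1.0, 1.0]          # point to explain
baseline = [0.0, 0.0, 0.0]   # reference values for dropped features

def f(values):
    return sum(c * v for c, v in zip(coefs, values))

def value(coalition):
    # Features in the coalition keep their value; the rest use the baseline.
    return f([x[i] if i in coalition else baseline[i] for i in range(len(x))])

def shapley(i, d=3):
    phi = 0.0
    others = [j for j in range(d) if j != i]
    for r in range(d):
        for s in itertools.combinations(others, r):
            weight = factorial(len(s)) * factorial(d - len(s) - 1) / factorial(d)
            phi += weight * (value(set(s) | {i}) - value(set(s)))
    return phi

print([shapley(i) for i in range(3)])  # ≈ [1.0, 2.0, 3.0]
```

For a linear model with baseline imputation, the attribution of feature i reduces to coef_i * (x_i - baseline_i), which the enumeration recovers.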
1 code implementation • 14 Dec 2022 • Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
Independence testing is a classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data.
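The batch setting being contrasted here can be illustrated with a simple fixed-sample permutation test (the statistic, data, and sample size are illustrative assumptions; the paper's sequential, anytime-valid tests are not implemented below):

```python
import numpy as np

# Batch independence test by permutation: fix the sample size up front,
# use |Pearson correlation| as the test statistic, and compare it to its
# permutation null distribution.

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)   # dependent on x by construction

def perm_pvalue(x, y, num_perms=999):
    stat = abs(np.corrcoef(x, y)[0, 1])
    perm_stats = [abs(np.corrcoef(x, rng.permutation(y))[0, 1])
                  for _ in range(num_perms)]
    # p-value: fraction of permuted statistics at least as extreme.
    return (1 + sum(s >= stat for s in perm_stats)) / (1 + num_perms)

print(perm_pvalue(x, y))  # small p-value: independence is rejected
```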
2 code implementations • 14 Jun 2022 • Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing
We introduce DoWhy-GCM, an extension of the DoWhy Python library that leverages graphical causal models.
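Conceptually, a graphical causal model workflow fits one mechanism per node along the graph and then answers causal queries such as interventional sampling. A plain-NumPy sketch of that idea on a two-node graph follows; it is illustrative only and is not the DoWhy-GCM API (see the library's documentation for actual usage):

```python
import numpy as np

# Concept sketch for the graph X -> Y: fit Y's mechanism from data, then
# draw samples under the intervention do(X = 1). Ground-truth process and
# mechanism class are illustrative assumptions.

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(scale=0.1, size=1000)   # ground truth: Y = 2X + N

# "Fit" Y's mechanism with least squares (slope of Y on X).
slope = np.polyfit(x, y, deg=1)[0]

# Interventional samples under do(X = 1): push the fixed value through
# the fitted mechanism plus resampled residual noise.
residuals = y - slope * x
y_do = slope * 1.0 + rng.choice(residuals, size=1000)
print(round(float(np.mean(y_do)), 1))  # 2.0
```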
no code implementations • 1 Jul 2020 • Dominik Janzing, Patrick Blöbaum, Atalanti A. Mastakouri, Philipp M. Faller, Lenon Minorics, Kailash Budhathoki
We propose a notion of causal influence that describes the 'intrinsic' part of the contribution of a node to a target node in a DAG.
no code implementations • 5 Dec 2019 • Dominik Janzing, Kailash Budhathoki, Lenon Minorics, Patrick Blöbaum
We describe a formal approach to identify 'root causes' of outliers observed in $n$ variables $X_1,\dots, X_n$ in a scenario where the causal relation between the variables is a known directed acyclic graph (DAG).
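A much-simplified sketch of the underlying mechanics: with the DAG and (here, assumed linear) mechanisms known, invert each structural equation to recover every node's noise term, then flag the node with the most anomalous noise as the candidate root cause. The paper's actual method is a quantitative attribution built on top of this; only the noise-reconstruction step is shown, and the chain and coefficients are illustrative assumptions:

```python
# Chain X1 -> X2 -> X3 with known linear mechanisms (illustrative).
def noise_terms(x1, x2, x3):
    n1 = x1                 # X1 = N1
    n2 = x2 - 2.0 * x1      # X2 = 2*X1 + N2
    n3 = x3 - 0.5 * x2      # X3 = 0.5*X2 + N3
    return {"X1": n1, "X2": n2, "X3": n3}

# Outlying observation: X2's mechanism was hit by a large shock,
# which also propagated downstream to X3.
obs = noise_terms(x1=0.1, x2=5.0, x3=2.6)
root_cause = max(obs, key=lambda k: abs(obs[k]))
print(root_cause)  # X2
```

Note that X3 is itself extreme here, yet its reconstructed noise is small: the anomaly was inherited, not generated locally.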
no code implementations • 29 Oct 2019 • Dominik Janzing, Lenon Minorics, Patrick Blöbaum
We discuss promising recent contributions on quantifying feature relevance using Shapley values, where we observed some confusion about which probability distribution is the right one for dropped features.
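Why the choice of distribution matters can be seen numerically: for a model that ignores one of two strongly correlated features, conditional imputation of the dropped feature leaks information through its correlated partner, while marginal (interventional) imputation does not. The data and model below are illustrative assumptions:

```python
import numpy as np

# Model f uses only x0, but x1 is nearly a copy of x0. Compare imputing
# a dropped x0 from its marginal P(x0) versus the conditional P(x0 | x1).

rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = x0 + rng.normal(scale=0.1, size=5000)   # nearly a copy of x0

def f(a, b):
    return a                                  # model ignores x1 entirely

point = (2.0, 2.0)

# Drop x0 while keeping x1 = 2.0:
marginal = np.mean(f(x0, point[1]))                # impute x0 ~ P(x0)
mask = np.abs(x1 - point[1]) < 0.05
conditional = np.mean(f(x0[mask], point[1]))       # impute x0 ~ P(x0 | x1)

# Marginal imputation stays near 0; conditional imputation is pulled
# toward 2 by the correlated partner x1.
print(abs(marginal) < 0.1, abs(conditional - 2.0) < 0.2)  # True True
```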
no code implementations • 19 Feb 2018 • Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf
We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions.
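The comparison can be sketched in a few lines: regress each variable on the other and prefer the direction with the smaller least-squares error. The data-generating process and regression class below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Ground truth X -> Y with a non-invertible mechanism: the forward fit
# recovers the quadratic, while the backward fit cannot resolve the two
# x-values mapping to the same y, so its error stays large.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)
y = (x - 0.5) ** 2 + rng.normal(scale=0.01, size=2000)

def mse(a, b, deg=3):
    # Least-squares error of predicting b from a polynomial in a.
    coeffs = np.polyfit(a, b, deg)
    return float(np.mean((b - np.polyval(coeffs, a)) ** 2))

print(mse(x, y) < mse(y, x))  # True: the causal direction fits better
```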
1 code implementation • 3 Sep 2017 • Patrick Blöbaum, Shohei Shimizu
The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear.
no code implementations • 11 Oct 2016 • Patrick Blöbaum, Takashi Washio, Shohei Shimizu
It is generally difficult to make any statements about the expected prediction error in a univariate setting without further knowledge about how the data were generated.