1 code implementation • 10 Jan 2023 • Muhammad Faaiz Taufiq, Patrick Blöbaum, Lenon Minorics
Shapley values are a model-agnostic method for explaining model predictions.
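To make the idea concrete, here is a minimal sketch of exact Shapley values computed by enumerating all feature coalitions; the toy additive "game" and all names are illustrative, not the paper's implementation:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley values by enumerating all coalitions.

    `value` maps a frozenset of features to a payoff, e.g. the model's
    prediction when only those features are available.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {f}) - value(S))
        phi[f] = total
    return phi

# Toy additive game: the payoff is the sum of per-feature contributions.
contrib = {"x1": 2.0, "x2": -1.0, "x3": 0.5}
value = lambda S: sum(contrib[f] for f in S)
phi = shapley_values(value, list(contrib))
print(phi)
# For an additive game, each Shapley value equals that feature's own contribution.
```

Exact enumeration is exponential in the number of features; practical explainers approximate these sums by sampling coalitions.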
1 code implementation • 3 Oct 2022 • Mononito Goswami, Cristian Challu, Laurent Callot, Lenon Minorics, Andrey Kan
The practical problem of selecting the most accurate model for a given dataset without labels has received little attention in the literature.
no code implementations • 23 Feb 2022 • Lenon Minorics, Caner Turkmen, David Kernert, Patrick Blöbaum, Laurent Callot, Dominik Janzing
This paper proposes a new approach for testing Granger non-causality on panel data.
no code implementations • 4 Feb 2022 • You-Lin Chen, Lenon Minorics, Dominik Janzing
We propose a method to distinguish causal influence from hidden confounding in the following scenario: given a target variable Y, potential causal drivers X, and a large number of background features, we identify causal relationships based on the stability of the regression coefficients of X on Y with respect to the choice of background features.
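The intuition behind the stability criterion can be sketched on synthetic data: a genuinely causal driver keeps roughly the same regression coefficient no matter which background features are adjusted for, while a driver that is only associated with Y through a hidden confounder sees its coefficient swing depending on whether the confounding feature happens to be included. The data-generating setup and names below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_bg = 2000, 20

# Background features; B[:, 0] confounds X_conf and Y.
B = rng.normal(size=(n, n_bg))
X_causal = rng.normal(size=n)                  # genuinely causes Y
X_conf = B[:, 0] + 0.1 * rng.normal(size=n)    # only associated via the confounder
Y = 1.5 * X_causal + 2.0 * B[:, 0] + rng.normal(size=n)

def coef_of_x(x, bg_subset):
    """OLS coefficient of x on Y, adjusting for a subset of background features."""
    design = np.column_stack([x, B[:, bg_subset], np.ones(n)])
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return beta[0]

def coef_std(x, trials=50, subset_size=5):
    """Spread of the coefficient of x across random background subsets."""
    coefs = [coef_of_x(x, rng.choice(n_bg, subset_size, replace=False))
             for _ in range(trials)]
    return float(np.std(coefs))

s_causal = coef_std(X_causal)
s_conf = coef_std(X_conf)
print("causal driver coef std:    ", s_causal)   # small: coefficient is stable
print("confounded driver coef std:", s_conf)     # large: depends on the subset
```

The confounded driver's coefficient jumps between roughly 2 (confounder omitted) and roughly 0 (confounder adjusted for), which is exactly the instability the criterion exploits.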
1 code implementation • 18 Nov 2021 • Leena Chennuru Vankadara, Philipp Michael Faller, Michaela Hardt, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing
Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions under the VAR model).
no code implementations • 1 Jul 2020 • Dominik Janzing, Patrick Blöbaum, Atalanti A. Mastakouri, Philipp M. Faller, Lenon Minorics, Kailash Budhathoki
We propose a notion of causal influence that describes the 'intrinsic' part of the contribution of a node to a target node in a DAG.
no code implementations • 5 Dec 2019 • Dominik Janzing, Kailash Budhathoki, Lenon Minorics, Patrick Blöbaum
We describe a formal approach to identify 'root causes' of outliers observed in $n$ variables $X_1,\dots, X_n$ in a scenario where the causal relation between the variables is a known directed acyclic graph (DAG).
no code implementations • 29 Oct 2019 • Dominik Janzing, Lenon Minorics, Patrick Blöbaum
We discuss promising recent contributions on quantifying feature relevance using Shapley values, where we observed some confusion about which probability distribution is the right one for dropped features.
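The distinction at stake can be shown numerically. With correlated inputs, "dropping" a feature by averaging over its conditional distribution given the kept features lets the dropped feature inherit credit through correlation, whereas averaging over its marginal (interventional) distribution does not. The toy model and numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two strongly correlated inputs; the model only uses x1.
x2 = rng.normal(size=n)
x1 = x2 + 0.1 * rng.normal(size=n)
model = lambda a, b: 3.0 * a          # x2 is ignored by the model

x1_obs, x2_obs = 2.0, 2.0             # the instance being explained

# "Drop" x1, keep x2: what baseline prediction do we report?
# Marginal (interventional): replace x1 with draws from p(x1).
marginal = model(x1, np.full(n, x2_obs)).mean()
# Conditional (observational): replace x1 with draws from p(x1 | x2 ~ x2_obs).
conditional = model(x1[np.abs(x2 - x2_obs) < 0.05], x2_obs).mean()

print(f"marginal baseline:    {marginal:.2f}")     # near 0: x1 averaged freely
print(f"conditional baseline: {conditional:.2f}")  # near 6: x2 'leaks' in via x1
```

Under the conditional distribution the model still predicts nearly its full value even though x1 was "dropped", so the ignored feature x2 would receive credit; this is the kind of discrepancy the discussion turns on.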