1 code implementation • 18 Jul 2023 • Philipp M. Faller, Leena Chennuru Vankadara, Atalanti A. Mastakouri, Francesco Locatello, Dominik Janzing
In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.
no code implementations • 16 May 2023 • Elias Eulig, Atalanti A. Mastakouri, Patrick Blöbaum, Michaela Hardt, Dominik Janzing
By comparing the number of inconsistencies with those on the surrogate baseline, we derive an interpretable metric that captures whether the DAG fits significantly better than random.
no code implementations • 11 May 2023 • Dominik Janzing, Philipp M. Faller, Leena Chennuru Vankadara
Here, causal discovery becomes more modest and more accessible to empirical tests than usual: rather than trying to find a causal hypothesis that is 'true', a causal hypothesis is considered useful whenever it correctly predicts statistical properties of unobserved joint distributions.
no code implementations • 10 May 2023 • Bijan Mazaheri, Atalanti Mastakouri, Dominik Janzing, Michaela Hardt
Statistical prediction models are often trained on data from different probability distributions than their eventual use cases.
no code implementations • 23 Apr 2023 • Yuchen Zhu, Kailash Budhathoki, Jonas Kuebler, Dominik Janzing
On the positive side, we show that cause-effect relations can be aggregated when the macro interventions are such that the distribution of micro states is the same as in the observational distribution; we term these natural macro interventions.
no code implementations • 4 Apr 2023 • Numair Sani, Atalanti A. Mastakouri, Dominik Janzing
In the absence of such assumptions, existing work requires multiple observations of datasets that contain the same treatment and outcome variables, in order to establish bounds on these probabilities.
no code implementations • 15 Nov 2022 • Dominik Janzing, Sergio Hernan Garrido Mejia
Discussions on causal relations in real life often consider variables for which the definition of causality is unclear since the notion of interventions on the respective variables is obscure.
no code implementations • 26 Jun 2022 • Kailash Budhathoki, George Michailidis, Dominik Janzing
Existing methods of explainable AI and interpretable ML cannot explain change in the values of an output variable for a statistical unit in terms of the change in the input values and the change in the "mechanism" (the function transforming input to output).
2 code implementations • 14 Jun 2022 • Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing
We introduce DoWhy-GCM, an extension of the DoWhy Python library, that leverages graphical causal models.
no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello
This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.
no code implementations • 23 Feb 2022 • Lenon Minorics, Caner Turkmen, David Kernert, Patrick Bloebaum, Laurent Callot, Dominik Janzing
This paper proposes a new approach for testing Granger non-causality on panel data.
no code implementations • 4 Feb 2022 • You-Lin Chen, Lenon Minorics, Dominik Janzing
We propose a method to distinguish causal influence from hidden confounding in the following scenario: given a target variable Y, potential causal drivers X, and a large number of background features, we identify causal relationships via a novel criterion based on the stability of the regression coefficients of X on Y under different selections of background features.
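A minimal, stdlib-only sketch of the idea (not the authors' exact criterion): if X genuinely causes Y, the regression coefficient of X should stay stable when different background features are added as covariates. All data and variable names below are illustrative assumptions.

```python
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_coef_of_x(xs, zs, ys):
    # OLS of y on [1, x, z] via normal equations; return the coefficient of x.
    rows = [[1.0, xi, zi] for xi, zi in zip(xs, zs)]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, ys)) for i in range(p)]
    return solve(A, b)[1]

random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]          # X genuinely causes Y
background = [[random.gauss(0, 1) for _ in range(n)] for _ in range(5)]

# Re-fit with each background feature as an extra covariate.
coefs = [ols_coef_of_x(x, z, y) for z in background]
spread = max(coefs) - min(coefs)
print(round(spread, 3))  # small spread: the coefficient of X is stable
```

Under hidden confounding, by contrast, conditioning on different background features that correlate with the confounder would shift the coefficient of X noticeably.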
1 code implementation • 2 Feb 2022 • Luigi Gresele, Julius von Kügelgen, Jonas M. Kübler, Elke Kirschbaum, Bernhard Schölkopf, Dominik Janzing
We introduce an approach to counterfactual inference based on merging information from multiple datasets.
1 code implementation • 18 Nov 2021 • Leena Chennuru Vankadara, Philipp Michael Faller, Michaela Hardt, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing
Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions under the VAR model).
no code implementations • 29 Oct 2021 • Michel Besserve, Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf
The principle of Independence of Causal Mechanisms (ICM) has provided a new perspective, leading to the Spectral Independence Criterion (SIC), which postulates that the power spectral density (PSD) of the cause time series is uncorrelated with the squared modulus of the frequency response of the filter generating the effect.
no code implementations • ICLR 2022 • Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf
Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions.
no code implementations • 15 Jul 2021 • Sergio Hernan Garrido Mejia, Elke Kirschbaum, Dominik Janzing
Another similarly important and challenging task is to quantify the causal influence of a treatment on a target in the presence of confounders.
no code implementations • 26 Feb 2021 • Kailash Budhathoki, Dominik Janzing, Patrick Bloebaum, Hoiyi Ng
We describe a formal approach based on graphical causal models to identify the "root causes" of the change in the probability distribution of variables.
no code implementations • 7 Feb 2021 • Dominik Janzing
The Principle of Insufficient Reason (PIR) assigns equal probabilities to each alternative of a random experiment whenever there is no reason to prefer one over the other.
no code implementations • 1 Jul 2020 • Dominik Janzing, Patrick Blöbaum, Atalanti A. Mastakouri, Philipp M. Faller, Lenon Minorics, Kailash Budhathoki
We propose a notion of causal influence that describes the `intrinsic' part of the contribution of a node on a target node in a DAG.
no code implementations • 18 May 2020 • Atalanti A. Mastakouri, Bernhard Schölkopf, Dominik Janzing
We study the identification of direct and indirect causes on time series and provide conditions in the presence of latent variables, which we prove to be necessary and sufficient under some graph constraints.
no code implementations • 1 Apr 2020 • Michel Besserve, Rémy Sun, Dominik Janzing, Bernhard Schölkopf
Generative models can be trained to emulate complex empirical data, but are they useful to make predictions in the context of previously unobserved environments?
no code implementations • 5 Dec 2019 • Dominik Janzing, Kailash Budhathoki, Lenon Minorics, Patrick Blöbaum
We describe a formal approach to identify 'root causes' of outliers observed in $n$ variables $X_1,\dots, X_n$ in a scenario where the causal relation between the variables is a known directed acyclic graph (DAG).
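A drastically simplified stand-in for such an attribution scheme, assuming a known linear SCM along a chain X1 → X2 → X3 with unit-variance noise: invert each mechanism to recover its noise term for the observed sample, and attribute the outlier to the node whose noise term is most extreme. The coefficients and variable names are illustrative assumptions, not the paper's actual procedure.

```python
import random

random.seed(1)

# Assumed known linear SCM: X1 = N1, X2 = 0.8*X1 + N2, X3 = 0.8*X2 + N3.
def root_cause(x1, x2, x3):
    # Recover each node's noise term by inverting its mechanism,
    # then pick the node with the most extreme noise.
    noises = {"X1": x1, "X2": x2 - 0.8 * x1, "X3": x3 - 0.8 * x2}
    return max(noises, key=lambda k: abs(noises[k]))

# Draw an ordinary sample, but inject an extreme (8 sigma) noise term at X2.
x1 = random.gauss(0, 1)
x2 = 0.8 * x1 + 8.0
x3 = 0.8 * x2 + random.gauss(0, 1)
print(root_cause(x1, x2, x3))  # attributes the outlier to X2
```

Note that X3 is also anomalous in absolute value here, yet its reconstructed noise is ordinary, which is exactly why attribution must happen at the level of noise terms rather than observed values.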
no code implementations • NeurIPS 2019 • Kristof Meding, Dominik Janzing, Bernhard Schölkopf, Felix A. Wichmann
We employ a so-called frozen noise paradigm that enables us to compare human performance with four different algorithms on a trial-by-trial basis: a causal inference algorithm exploiting the dependence structure of additive noise terms, a neurally inspired network, a Bayesian ideal observer model, and a simple heuristic.
no code implementations • NeurIPS 2019 • Atalanti Mastakouri, Bernhard Schölkopf, Dominik Janzing
We propose a constraint-based causal feature selection method for identifying causes of a given target variable, selecting from a set of candidate variables, while there can also be hidden variables acting as common causes with the target.
no code implementations • 29 Oct 2019 • Dominik Janzing, Lenon Minorics, Patrick Blöbaum
We discuss promising recent contributions on quantifying feature relevance using Shapley values, where we observed some confusion on which probability distribution is the right one for dropped features.
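The distributional question can be made concrete with exact Shapley values on a tiny model: "dropped" features must be averaged over some reference distribution, and the attribution depends on which one is chosen. The model, evaluation point, and background samples below are illustrative assumptions.

```python
from itertools import permutations
from statistics import mean

# Toy model with two features; we attribute f's output at `point`
# to each feature via exact Shapley values.
def f(x1, x2):
    return 3.0 * x1 + x2

background = [(-1.0, -1.0), (1.0, 1.0), (-1.0, 1.0), (1.0, -1.0)]  # assumed reference samples
point = (1.0, 2.0)

def value(subset):
    # Expected output when features in `subset` are fixed to `point`
    # and the dropped features are averaged over the background.
    return mean(
        f(point[0] if 0 in subset else b[0],
          point[1] if 1 in subset else b[1])
        for b in background
    )

def shapley(i):
    # Average marginal contribution of feature i over all feature orderings.
    contribs = []
    for order in permutations(range(2)):
        pre = set()
        for j in order:
            if j == i:
                contribs.append(value(pre | {i}) - value(pre))
                break
            pre.add(j)
    return mean(contribs)

phi = [shapley(0), shapley(1)]
print(phi)  # the contributions sum to f(point) minus the background average
```

Swapping `background` for samples from a conditional distribution (given the retained features) would change `value`, and hence the attributions, which is the confusion the paper addresses.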
1 code implementation • NeurIPS 2019 • Dominik Janzing
I argue that regularizing terms in standard regression methods not only help against overfitting finite data, but sometimes also yield better causal models in the infinite sample regime.
no code implementations • 9 Apr 2018 • Dominik Janzing
Here, causal inference becomes more modest and more accessible to empirical tests than usual: rather than trying to find a causal hypothesis that is 'true' (a problematic term when it is unclear how to define interventions), a causal hypothesis is useful whenever it correctly predicts statistical properties of unobserved joint distributions.
Statistics Theory
no code implementations • ICML 2018 • Dominik Janzing, Bernhard Schoelkopf
We consider linear models where $d$ potential causes $X_1,..., X_d$ are correlated with one target quantity $Y$ and propose a method to infer whether the association is causal or whether it is an artifact caused by overfitting or hidden common causes.
no code implementations • 19 Feb 2018 • Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf
We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions.
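A stdlib-only sketch of the comparison, under illustrative assumptions (standardized variables, a degree-3 polynomial regressor, a quadratic ground-truth mechanism): fit a regression in both directions and prefer the direction with the smaller least-squares error.

```python
import random
from statistics import mean, pstdev

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def poly_mse(u, v, deg=3):
    # Least-squares error of predicting v by a degree-`deg` polynomial in u.
    rows = [[ui ** k for k in range(deg + 1)] for ui in u]
    p = deg + 1
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * vi for r, vi in zip(rows, v)) for i in range(p)]
    w = solve(A, b)
    return mean((vi - sum(wk * r[k] for k, wk in enumerate(w))) ** 2
                for r, vi in zip(rows, v))

def standardize(u):
    m, s = mean(u), pstdev(u)
    return [(ui - m) / s for ui in u]

random.seed(0)
n = 1000
x = [random.uniform(-1, 1) for _ in range(n)]
y = [xi ** 2 + random.gauss(0, 0.05) for xi in x]   # true mechanism: X -> Y
xs, ys = standardize(x), standardize(y)

direction = "X->Y" if poly_mse(xs, ys) < poly_mse(ys, xs) else "Y->X"
print(direction)
```

Here the asymmetry is extreme because E[X | Y] is nearly constant for the quadratic mechanism, so the anticausal regression error stays large; the paper's analysis covers the general conditions under which the causal direction yields the smaller error.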
no code implementations • ICLR 2018 • Michel Besserve, Dominik Janzing, Bernhard Schoelkopf
Generative models are important tools to capture and investigate the properties of complex empirical data.
no code implementations • 4 Jul 2017 • Paul K. Rubenstein, Sebastian Weichwald, Stephan Bongers, Joris M. Mooij, Dominik Janzing, Moritz Grosse-Wentrup, Bernhard Schölkopf
Complex systems can be modelled at various levels of detail.
no code implementations • NeurIPS 2017 • Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf
Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.
no code implementations • 5 May 2017 • Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, Dominik Janzing
The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms.
no code implementations • 5 Apr 2017 • Dominik Janzing, Bernhard Schoelkopf
We study a model where one target variable Y is correlated with a vector X:=(X_1,..., X_d) of predictor variables being potential causes of Y.
1 code implementation • 12 May 2015 • Bernhard Schölkopf, David W. Hogg, Dun Wang, Daniel Foreman-Mackey, Dominik Janzing, Carl-Johann Simon-Gabriel, Jonas Peters
We describe a method for removing the effect of confounders in order to reconstruct a latent quantity of interest.
no code implementations • 4 Mar 2015 • Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf, Michel Besserve
Assuming the effect is generated by the cause through a linear system, we propose a new approach based on the hypothesis that nature chooses the "cause" and the "mechanism that generates the effect from the cause" independently of each other.
no code implementations • 11 Dec 2014 • Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, Bernhard Schölkopf
We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data.
no code implementations • 14 Nov 2014 • Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf
A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.
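A minimal sketch of this procedure for a bivariate series, under illustrative assumptions (a VAR(1) model fit by per-equation OLS; the data-generating coefficients are made up): the off-diagonal entries of the estimated matrix are then read as lagged causal influences.

```python
import random

def fit_var1(xs):
    # OLS fit of x_t = B x_{t-1} + e for a 2-dimensional series,
    # solved per equation via Cramer's rule on the 2x2 normal equations.
    past, pres = xs[:-1], xs[1:]
    s00 = sum(p[0] * p[0] for p in past)
    s01 = sum(p[0] * p[1] for p in past)
    s11 = sum(p[1] * p[1] for p in past)
    det = s00 * s11 - s01 * s01
    B = []
    for i in range(2):
        t0 = sum(p[0] * c[i] for p, c in zip(past, pres))
        t1 = sum(p[1] * c[i] for p, c in zip(past, pres))
        B.append([(t0 * s11 - t1 * s01) / det, (s00 * t1 - s01 * t0) / det])
    return B

random.seed(0)
x = [[0.0, 0.0]]
for _ in range(2000):
    x1, x2 = x[-1]
    x.append([0.5 * x1 + random.gauss(0, 1),
              0.8 * x1 + 0.5 * x2 + random.gauss(0, 1)])  # X1 drives X2, not vice versa

B = fit_var1(x)
print([[round(v, 2) for v in row] for row in B])
# B[1][0] estimates the X1 -> X2 lag coefficient (true value 0.8);
# B[0][1] estimates X2 -> X1 (true value 0).
```

The paper's point is precisely that this causal reading of the regression matrix can fail, e.g. under hidden confounding or measurement error, even when the fit itself is good.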
no code implementations • 9 Aug 2014 • Joris Mooij, Dominik Janzing, Bernhard Schoelkopf
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM).
no code implementations • 19 Jun 2014 • Katja Ried, Megan Agnew, Lydia Vermeyden, Dominik Janzing, Robert W. Spekkens, Kevin J. Resch
The problem of using observed correlations to infer causal relations is relevant to a wide variety of scientific disciplines.
no code implementations • 11 Feb 2014 • Dominik Janzing, Bastian Steudel, Naji Shajarisales, Bernhard Schölkopf
Information Geometric Causal Inference (IGCI) is a new approach to distinguish between cause and effect for two variables.
no code implementations • 19 Dec 2013 • Samory Kpotufe, Eleni Sgouritsa, Dominik Janzing, Bernhard Schölkopf
We analyze a family of methods for statistical causal inference from sample under the so-called Additive Noise Model.
no code implementations • NeurIPS 2013 • Jonas Peters, Dominik Janzing, Bernhard Schölkopf
We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo).
no code implementations • 26 Sep 2013 • Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schölkopf
We consider the problem of learning causal directed acyclic graphs from an observational joint distribution.
no code implementations • 26 Sep 2013 • Eleni Sgouritsa, Dominik Janzing, Jonas Peters, Bernhard Schoelkopf
We propose a kernel method to identify finite mixtures of nonparametric product distributions.
no code implementations • 30 Apr 2013 • Joris M. Mooij, Dominik Janzing, Bernhard Schölkopf
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM).
1 code implementation • 27 Jun 2012 • Bernhard Schoelkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, Joris Mooij
We consider the problem of function estimation in the case where an underlying causal model can be inferred.
no code implementations • 29 Mar 2012 • Dominik Janzing, David Balduzzi, Moritz Grosse-Wentrup, Bernhard Schölkopf
Here we propose a set of natural, intuitive postulates that a measure of causal strength should satisfy.
Statistics Theory
no code implementations • 15 Mar 2012 • Povilas Daniusis, Dominik Janzing, Joris Mooij, Jakob Zscheischler, Bastian Steudel, Kun Zhang, Bernhard Schoelkopf
We consider two variables that are related to each other by an invertible function.
2 code implementations • 14 Feb 2012 • Kun Zhang, Jonas Peters, Dominik Janzing, Bernhard Schoelkopf
Conditional independence testing is an important problem, especially in Bayesian network learning and causal discovery.
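The paper's kernel-based test handles general nonlinear dependences; as a stdlib-only illustration of the task itself, here is the much simpler classical partial-correlation approach, which is a valid conditional independence test only in the linear-Gaussian case. The data-generating setup is an assumption for the demo.

```python
import random
from statistics import mean

def corr(u, v):
    mu, mv = mean(u), mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def residuals(u, z):
    # Residuals of a simple linear regression of u on z (with intercept).
    mu, mz = mean(u), mean(z)
    beta = sum((a - mu) * (c - mz) for a, c in zip(u, z)) / \
           sum((c - mz) ** 2 for c in z)
    return [a - mu - beta * (c - mz) for a, c in zip(u, z)]

def partial_corr(x, y, z):
    # Correlation of X and Y after linearly removing Z from both:
    # near zero when X and Y are (linearly) independent given Z.
    return corr(residuals(x, z), residuals(y, z))

random.seed(0)
n = 3000
z = [random.gauss(0, 1) for _ in range(n)]
x = [c + random.gauss(0, 1) for c in z]
y = [c + random.gauss(0, 1) for c in z]   # X and Y correlate, but X ⟂ Y | Z

print(round(corr(x, y), 2), round(partial_corr(x, y, z), 2))
```

A common-cause structure like this is exactly where CI testing matters for constraint-based causal discovery: the marginal correlation is substantial while the partial correlation vanishes, so the X–Y edge can be removed.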
no code implementations • NeurIPS 2011 • Joris M. Mooij, Dominik Janzing, Tom Heskes, Bernhard Schölkopf
We study a particular class of cyclic causal models, where each variable is a (possibly nonlinear) function of its parents and additive noise.
no code implementations • NeurIPS 2010 • Oliver Stegle, Dominik Janzing, Kun Zhang, Joris M. Mooij, Bernhard Schölkopf
To this end, we consider the hypothetical effect variable to be a function of the hypothetical cause variable and an independent noise term (not necessarily additive).
no code implementations • NeurIPS 2008 • Patrik O. Hoyer, Dominik Janzing, Joris M. Mooij, Jonas Peters, Bernhard Schölkopf
The discovery of causal relationships between a set of observed variables is a fundamental problem in science.
no code implementations • 23 Apr 2008 • Dominik Janzing, Bernhard Schoelkopf
We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that takes into account also the complexity of conditional probability densities, making it possible to select among Markov equivalent causal graphs.