Hokey Pokey Causal Discovery: Using Deep Learning Model Errors to Learn Causal Structure

1 Jan 2021 · Emily Saldanha, Dustin Arendt, Svitlana Volkova

While machine learning excels at learning predictive models from observational data, learning the causal mechanisms behind observed phenomena presents a significant challenge: distinguishing true causal relationships from confounding and other sources of spurious correlation. Many existing algorithms for discovering causal structure from observational data rely on evaluating conditional independence relationships among features to account for the effects of confounding. However, the choice of independence tests for these algorithms often relies on assumptions about the data distributions and the type of causal relationships. To avoid these assumptions, we develop a novel deep learning approach, dubbed the Hokey Pokey model, which indirectly explores the conditional dependencies among a set of variables by rapidly comparing predictive errors under different combinations of input variables. We then use the results of this comparison as a predictive signal for causal relationships among the variables. We conduct rigorous experiments to evaluate model robustness and generalizability using generated datasets with known underlying causal relationships, and we analyze the capacity of model error comparisons to provide a predictive signal for causal structure. Our model outperforms commonly used baseline models (PC and GES) and is capable of discovering causal relationships of varying complexity (graph size, density, and structure) in both binary and continuous data.
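The paper's model itself is not released. As a minimal sketch of the core idea, comparing predictive errors with and without a candidate input variable to probe conditional dependence, the snippet below uses a simple linear predictor in place of the deep model; the function name, the linear probe, and the toy causal chain `z → x → y` are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def error_drop(y, X_base, x_cand):
    """Decrease in predictive MSE from adding x_cand to predictors X_base.

    A large drop suggests x_cand carries information about y beyond X_base
    (conditional dependence); a near-zero drop suggests y is (approximately)
    independent of x_cand given X_base. A linear probe stands in for the
    paper's deep model here.
    """
    def mse(X):
        X1 = np.column_stack([np.ones(len(y)), X])  # add intercept column
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return np.mean((y - X1 @ coef) ** 2)

    return mse(X_base) - mse(np.column_stack([X_base, x_cand]))

# Toy data with known structure: z -> x -> y (z affects y only through x).
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
x = z + rng.normal(size=2000)
y = 2 * x + rng.normal(size=2000)

# x should reduce error on y well beyond what z provides (large drop) ...
print(error_drop(y, z[:, None], x))
# ... while z adds almost nothing once x is known (y independent of z given x).
print(error_drop(y, x[:, None], z))
```

Repeating such comparisons over many variable subsets yields the error-difference signals that the paper's model maps to causal-structure predictions.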

