1 code implementation • 13 Jun 2022 • Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
Recent work has shown that one can reconstruct the causal variables from temporal sequences of observations under the assumption that there are no instantaneous causal relations between them.
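A minimal sketch (illustrative only, not taken from the paper's code) of what the no-instantaneous-relations assumption amounts to: every causal variable at time t+1 depends only on the variables at time t plus its own noise, never on other variables within the same time step.

```python
# Hypothetical illustration of the "no instantaneous causal relations" assumption:
# each causal variable at time t+1 depends only on variables at time t
# (plus its own noise), never on another variable at the same time step.
import numpy as np

rng = np.random.default_rng(0)
T = 100                      # number of time steps
c = np.zeros((T, 2))         # two latent causal variables c1, c2
for t in range(T - 1):
    # c1 influences c2 only across time steps (lag-1), not within a step
    c[t + 1, 0] = 0.8 * c[t, 0] + rng.normal(scale=0.1)
    c[t + 1, 1] = 0.5 * c[t, 1] + 0.4 * c[t, 0] + rng.normal(scale=0.1)

# An observation function (e.g., a renderer) would map c[t] to images;
# the paper studies recovering the causal variables from such sequences.
```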
no code implementations • 30 Mar 2022 • Fan Feng, Biwei Huang, Kun Zhang, Sara Magliacane
Dealing with non-stationarity in environments (e.g., in the transition dynamics) and objectives (e.g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL).
1 code implementation • 7 Feb 2022 • Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments.
1 code implementation • ICLR 2022 • Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang
We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.
1 code implementation • NeurIPS 2020 • Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.
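For intuition, here is a sketch of the standard fact such bounds build on (an illustrative example, not the paper's algorithm; the variable names and the correlation test are assumptions): a randomized intervention on a node makes it independent of its former parents while it stays dependent on its children, which orients every edge incident to that node.

```python
# Hypothetical sketch of why interventions orient edges: after a randomized
# intervention on X, X is independent of its former parents but still
# influences its children, so each edge incident to X can be oriented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000

# True graph: A -> X -> B (A is a parent of X, B is a child of X).
# Under do(X := noise), the A -> X edge is cut.
a = rng.normal(size=n)
x = rng.normal(size=n)              # randomized intervention on X
b = 0.9 * x + rng.normal(size=n)

def dependent(u, v, alpha=0.01):
    """Crude marginal (in)dependence check via a Pearson correlation test."""
    return stats.pearsonr(u, v)[1] < alpha

print("X dependent on A under do(X):", dependent(x, a))   # False -> orient A -> X
print("X dependent on B under do(X):", dependent(x, b))   # True  -> orient X -> B
```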
1 code implementation • 2 Jul 2020 • Nathan Hunt, Nathan Fulton, Sara Magliacane, Nghia Hoang, Subhro Das, Armando Solar-Lezama
We also prove that our method of enforcing the safety constraints preserves all safe policies from the original environment.
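A rough sketch of the general action-masking (shielding) idea behind such a guarantee (a hypothetical illustration, not the paper's construction): unsafe actions are masked before the agent acts, and a policy that already avoids them is left unchanged by the renormalisation.

```python
# Hypothetical action-masking sketch: unsafe actions are removed before
# sampling, so any policy that was already safe behaves exactly as before.
import numpy as np

def shielded_step(policy_probs, safe_mask, rng):
    """policy_probs: action probabilities; safe_mask: boolean array, True = safe."""
    masked = policy_probs * safe_mask
    if masked.sum() == 0:                     # policy puts no mass on safe actions
        masked = safe_mask.astype(float)      # fall back to uniform over safe actions
    masked = masked / masked.sum()
    return rng.choice(len(policy_probs), p=masked)

rng = np.random.default_rng(0)
probs = np.array([0.7, 0.3, 0.0])             # a policy that already avoids action 2
mask = np.array([True, True, False])          # action 2 is unsafe in this state
# For a policy with zero mass on unsafe actions the renormalisation is a no-op,
# which is the sense in which all safe policies are preserved.
print(shielded_step(probs, mask, rng))
```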
no code implementations • NeurIPS 2019 • Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, Guy Bresler
We consider the problem of experimental design for learning causal graphs that have a tree structure.
no code implementations • 18 Oct 2018 • Tineke Blom, Anna Klimovskaia, Sara Magliacane, Joris M. Mooij
Causal discovery algorithms infer causal relations from data based on several assumptions, including notably the absence of measurement error.
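A small illustration of why this assumption matters (an example constructed here, not from the paper): in a chain X -> Y -> Z, conditioning on a noisy measurement of Y no longer renders X and Z independent, so a constraint-based method would read off the wrong structure.

```python
# Hypothetical illustration: measurement error on the mediator Y breaks the
# conditional independence X _||_ Z | Y implied by the chain X -> Y -> Z.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)
y_noisy = y + rng.normal(scale=1.0, size=n)   # measured version of Y

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(partial_corr(x, z, y))        # ~0: X _||_ Z given the true Y
print(partial_corr(x, z, y_noisy))  # clearly nonzero: the independence is lost
```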
1 code implementation • NeurIPS 2018 • Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, Joris M. Mooij
An important goal common to domain adaptation and causal inference is to make accurate predictions when the distributions for the source (or training) domain(s) and target (or test) domain(s) differ.
no code implementations • 30 Nov 2016 • Joris M. Mooij, Sara Magliacane, Tom Claassen
We explain how several well-known causal discovery algorithms can be seen as addressing special cases of the JCI framework, and we also propose novel implementations that extend existing causal discovery methods for purely observational data to the JCI setting.
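A minimal sketch of the pooling idea described here (hypothetical code, not the authors' implementation; the helper name and toy regimes are assumptions): stack datasets from different contexts, add a context indicator variable, and hand the pooled table to any causal discovery method for purely observational data, treating the context variable as an ordinary node.

```python
# Hypothetical JCI-style pooling sketch: stack data from several contexts,
# add a context indicator, and pass the pooled table to an observational
# causal discovery algorithm.
import numpy as np
import pandas as pd

def pool_contexts(datasets):
    """datasets: dict mapping a context label to a DataFrame of system variables."""
    frames = []
    for label, df in datasets.items():
        df = df.copy()
        df["context"] = label          # auxiliary context variable
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

rng = np.random.default_rng(0)
obs = pd.DataFrame({"X": rng.normal(size=500)})
obs["Y"] = obs["X"] + rng.normal(size=500)
do_x = pd.DataFrame({"X": rng.normal(size=500) + 2.0})   # shifted X in this regime
do_x["Y"] = do_x["X"] + rng.normal(size=500)

pooled = pool_contexts({"observational": obs, "do_X": do_x})
# A constraint-based algorithm run on pooled[["X", "Y", "context"]] can now test
# (in)dependences involving the context variable like any other node.
print(pooled.groupby("context").mean())
```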
1 code implementation • NeurIPS 2016 • Sara Magliacane, Tom Claassen, Joris M. Mooij
Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions.
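To make "borderline" concrete (an illustrative example, not the paper's method): with limited data, a weak dependence produces p-values that hover around the usual 0.05 threshold, so a hard accept/reject decision, and with it the inferred graph, can flip from one sample to the next.

```python
# Hypothetical illustration of a "borderline" independence decision: with few
# samples, a weak dependence yields p-values near the threshold, so a hard
# accept/reject choice (and hence the inferred structure) can flip between runs.
import numpy as np
from scipy import stats

def independence_pvalue(n, seed):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = 0.15 * x + rng.normal(size=n)      # weak true dependence
    return stats.pearsonr(x, y)[1]

for seed in range(5):
    p = independence_pvalue(n=150, seed=seed)
    decision = "independent" if p > 0.05 else "dependent"
    print(f"run {seed}: p = {p:.3f} -> treated as {decision}")
```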