no code implementations • 14 Mar 2024 • Davide Talon, Phillip Lippe, Stuart James, Alessio Del Bue, Sara Magliacane
Causal Representation Learning (CRL) aims at identifying high-level causal factors and their relationships from high-dimensional observations, e.g., images.
no code implementations • 13 Mar 2024 • Danru Xu, Dingling Yao, Sébastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, Sara Magliacane
Causal representation learning aims at identifying high-level causal variables from perceptual data.
1 code implementation • 7 Nov 2023 • Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen, Francesco Locatello
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as different data modalities.
1 code implementation • 16 Jun 2023 • Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
Identifying the causal variables of an environment and how to intervene on them is of core value in applications such as robotics and embodied AI.
1 code implementation • 1 Jun 2023 • Yongtuo Liu, Sara Magliacane, Miltiadis Kofinas, Efstratios Gavves
Dynamical systems with complex behaviours, e.g., immune system cells interacting with a pathogen, are commonly modelled by splitting the behaviour into different regimes, or modes, each with simpler dynamics, and then learning the switching behaviour from one mode to another.
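The regime-switching idea above can be illustrated with a toy simulation. The two linear modes (decay vs. growth) and the threshold switching rule below are invented for this sketch and are not the paper's model; the point is only that a trajectory with complicated overall behaviour can arise from simple per-mode dynamics plus a switching rule.

```python
# Hypothetical switching dynamical system: two simple linear modes and a
# threshold rule that switches between them.  All constants are invented.
A_DECAY = 0.8   # mode 0: state shrinks toward zero
A_GROWTH = 1.1  # mode 1: state grows

def step(x, mode):
    """Apply one step of the current mode's dynamics."""
    return (A_DECAY if mode == 0 else A_GROWTH) * x

def switch(x, mode):
    """Toy switching rule: grow while small, decay once large."""
    if mode == 1 and x > 5.0:
        return 0
    if mode == 0 and x < 1.0:
        return 1
    return mode

x, mode = 2.0, 1
trajectory, modes = [x], [mode]
for _ in range(30):
    x = step(x, mode)
    mode = switch(x, mode)
    trajectory.append(x)
    modes.append(mode)

# The state oscillates between the growth and decay regimes.
print(min(trajectory), max(trajectory))
```

Learning such a system from data means recovering both the per-mode dynamics (`step`) and the switching behaviour (`switch`), which is what makes the problem harder than fitting a single dynamics model.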
1 code implementation • NeurIPS 2023 • Ilze Amanda Auzina, Çağatay Yıldız, Sara Magliacane, Matthias Bethge, Efstratios Gavves
Neural ordinary differential equations (NODEs) have proven useful for learning the non-linear dynamics of arbitrary trajectories.
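A minimal sketch of the NODE idea, under simplifying assumptions: a small MLP defines the vector field f(x), and a trajectory is obtained by numerically integrating dx/dt = f(x). In an actual NODE the MLP's weights are trained by backpropagating through the solver; here they are fixed random values, and a fixed-step Euler solver stands in for a proper ODE solver, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-layer MLP defining the vector field f: R^2 -> R^2.
W1 = rng.normal(scale=0.5, size=(16, 2))
W2 = rng.normal(scale=0.5, size=(2, 16))

def f(x):
    """MLP vector field."""
    return W2 @ np.tanh(W1 @ x)

def integrate(x0, dt=0.01, steps=200):
    """Fixed-step Euler integration of dx/dt = f(x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.stack(xs)

traj = integrate(np.array([1.0, -1.0]))
print(traj.shape)  # (201, 2): initial state plus 200 Euler steps
```

Because f is continuous in x, the learned dynamics are defined at every point of the state space, not just at observed time steps, which is what lets NODEs handle arbitrary (irregularly sampled) trajectories.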
1 code implementation • 13 Jun 2022 • Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
To address this issue, we propose iCITRIS, a causal representation learning method that allows for instantaneous effects in intervened temporal sequences when intervention targets can be observed, e.g., as actions of an agent.
no code implementations • 30 Mar 2022 • Fan Feng, Biwei Huang, Kun Zhang, Sara Magliacane
Dealing with non-stationarity in environments (e.g., in the transition dynamics) and objectives (e.g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL).
1 code implementation • 7 Feb 2022 • Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves
Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments.
1 code implementation • ICLR 2022 • Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang
We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.
1 code implementation • NeurIPS 2020 • Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.
4 code implementations • 1 Nov 2020 • Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, Karthikeyan Shanmugam
Most existing works focus on worst-case or average-case lower bounds for the number of interventions required to orient a DAG.
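To make the counting question above concrete, here is an illustrative sketch (not the papers' algorithm): under a hard intervention on a node v, every edge incident to v can be oriented, since dependence between v and a neighbour u survives the intervention only when v causes u. The naive rule below counts which edges of a given skeleton a set of single-node interventions would orient; the path graph is an invented example.

```python
# Illustrative rule: an intervention on v orients all edges incident to v.
def orient_with_interventions(edges, targets):
    oriented = set()
    for v in targets:
        for (a, b) in edges:
            if v in (a, b):
                oriented.add((a, b))
    return oriented

# Skeleton of the path 0 - 1 - 2 - 3 - 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

# Intervening on every other node already touches all four edges,
# so two interventions suffice for this skeleton.
print(orient_with_interventions(edges, [1, 3]) == set(edges))  # True
```

Lower bounds of the kind the abstract mentions ask how many such interventions are unavoidable in the worst or average case over graphs, rather than for one fixed instance.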
1 code implementation • 2 Jul 2020 • Nathan Hunt, Nathan Fulton, Sara Magliacane, Nghia Hoang, Subhro Das, Armando Solar-Lezama
We also prove that our method of enforcing the safety constraints preserves all safe policies from the original environment.
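One common way to enforce safety constraints in RL, sketched here under invented assumptions (this is not necessarily the paper's construction), is a shield that masks unsafe actions before the agent acts: any policy over the shielded action set is safe, and every action that was already safe remains available, which is the sense in which safe policies are preserved.

```python
# Hypothetical action shield: restrict the agent to provably safe actions.
def shield(actions, is_safe):
    safe = [a for a in actions if is_safe(a)]
    if not safe:
        raise RuntimeError("no safe action available")
    return safe

# Toy 1-D gridworld of width 5: stepping right off the edge is unsafe.
actions = ["left", "right", "stay"]

def is_safe_at(a, x, width=5):
    return not (a == "right" and x == width - 1)

# At the right edge (x == 4) the shield removes "right" but keeps the rest.
print(shield(actions, lambda a: is_safe_at(a, x=4)))  # ['left', 'stay']
```

Because the shield only ever removes unsafe actions, the set of safe policies in the shielded environment coincides with the set of safe policies in the original one.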
no code implementations • NeurIPS 2019 • Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, Guy Bresler
We consider the problem of experimental design for learning causal graphs that have a tree structure.
no code implementations • 18 Oct 2018 • Tineke Blom, Anna Klimovskaia, Sara Magliacane, Joris M. Mooij
Causal discovery algorithms infer causal relations from data based on several assumptions, including notably the absence of measurement error.
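A quick numerical sketch of why measurement error matters for causal discovery: additive noise on a measured variable attenuates its correlations, which can flip borderline (conditional) independence decisions. The linear model and noise scales below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# True relation X -> Y, but we only observe a noisy copy X_tilde of X.
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)             # cause-effect relation
x_tilde = x + rng.normal(scale=2.0, size=n)  # heavy measurement error on X

r_true = np.corrcoef(x, y)[0, 1]        # approx 2/sqrt(5) ~ 0.89
r_noisy = np.corrcoef(x_tilde, y)[0, 1]  # attenuated to approx 0.4
print(r_true, r_noisy)
```

An independence test run on the noisy measurements sees a much weaker dependence than actually exists, so an algorithm that assumes error-free measurements may draw the wrong graph.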
1 code implementation • NeurIPS 2018 • Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, Joris M. Mooij
An important goal common to domain adaptation and causal inference is to make accurate predictions when the distributions for the source (or training) domain(s) and target (or test) domain(s) differ.
no code implementations • 30 Nov 2016 • Joris M. Mooij, Sara Magliacane, Tom Claassen
We explain how several well-known causal discovery algorithms can be seen as addressing special cases of the JCI framework, and we also propose novel implementations that extend existing causal discovery methods for purely observational data to the JCI setting.
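The core JCI move can be sketched in a few lines, under invented assumptions: pool data from several regimes and add an auxiliary context variable marking the regime, so a single causal discovery run over the pooled data can treat the context like any other variable. Here a shift of +3 on X in the second regime stands in for an intervention; the shift size and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two regimes over variables (X, Y): observational, and one where X is shifted.
obs = rng.normal(size=(200, 2))                           # observational
intv = rng.normal(size=(200, 2)) + np.array([3.0, 0.0])   # X intervened
pooled = np.vstack([obs, intv])
context = np.concatenate([np.zeros(200), np.ones(200)])   # regime indicator

# The context correlates strongly with X (the intervened variable) and
# barely with Y, hinting at the intervention target.
r_x = abs(np.corrcoef(context, pooled[:, 0])[0, 1])
r_y = abs(np.corrcoef(context, pooled[:, 1])[0, 1])
print(r_x, r_y)
```

Running a standard observational discovery algorithm on `(context, X, Y)` jointly is what extends it to the JCI setting: dependences involving the context variable encode where the regimes differ.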
1 code implementation • NeurIPS 2016 • Sara Magliacane, Tom Claassen, Joris M. Mooij
Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions.
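A building block behind the "borderline independence test decisions" above is the Fisher-z test of (partial) correlation. The unconditional version is sketched below with an invented weak dependence: with only a few samples, the p-value can land on either side of a significance threshold, so the binary accept/reject decisions that constraint-based methods rely on become unreliable.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def fisher_z_pvalue(x, y):
    """Two-sided p-value for H0: corr(x, y) = 0, via the Fisher z-transform."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A weak dependence that a small sample struggles to detect: the p-value
# may fall on either side of a 0.05 threshold depending on the draw.
x = rng.normal(size=60)
y = 0.2 * x + rng.normal(size=60)
print(fisher_z_pvalue(x, y))
```

Treating such marginal decisions as hard constraints is what makes limited-data constraint-based discovery brittle, motivating approaches that weigh the reliability of each test instead.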