1 code implementation • 12 Nov 2024 • Jack Brady, Julius von Kügelgen, Sébastien Lachapelle, Simon Buchholz, Thomas Kipf, Wieland Brendel
Using this formalism, we prove that interaction asymmetry enables both disentanglement and compositional generalization.
no code implementations • 30 Oct 2024 • Emanuele Marconato, Sébastien Lachapelle, Sebastian Weichwald, Luigi Gresele
We analyze identifiability as a possible explanation for the ubiquity of linear properties across language models, such as the vector difference between the representations of "easy" and "easiest" being parallel to that between "lucky" and "luckiest".
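The "linear property" in question can be illustrated with a toy check of whether two representation differences are parallel. This is a minimal sketch using made-up 4-dimensional vectors; real language-model representations would be extracted from a trained network, and the parallelism would only hold approximately.

```python
import numpy as np

# Hypothetical token representations, constructed so the superlative
# direction is exactly shared; real embeddings are high-dimensional
# and only approximately satisfy this.
easy     = np.array([0.2, 0.5, 0.1, 0.3])
easiest  = np.array([0.4, 0.9, 0.1, 0.3])
lucky    = np.array([0.7, 0.1, 0.6, 0.2])
luckiest = np.array([0.9, 0.5, 0.6, 0.2])

def cosine(u, v):
    """Cosine similarity; 1.0 means the vectors are parallel."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The linear property: the "superlative" direction is the same vector
# regardless of the base word.
print(cosine(easiest - easy, luckiest - lucky))  # → 1.0 for these constructed vectors
```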
1 code implementation • 9 Oct 2024 • Philippe Brouillard, Sébastien Lachapelle, Julia Kaltenborn, Yaniv Gurwicz, Dhanya Sridhar, Alexandre Drouin, Peer Nowack, Jakob Runge, David Rolnick
From these, one needs to learn both a mapping to causally relevant latent variables, such as a high-level representation of the El Niño phenomenon and other processes, and the causal model over them.

no code implementations • 30 May 2024 • Elliot Layne, Jason Hartford, Sébastien Lachapelle, Mathieu Blanchette, Dhanya Sridhar
The key insight is that the mapping from latent variables driven by gene expression to the phenotype of interest changes sparsely across closely related environments.
1 code implementation • 13 Mar 2024 • Danru Xu, Dingling Yao, Sébastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, Sara Magliacane
Causal representation learning aims at identifying high-level causal variables from perceptual data.
1 code implementation • 10 Jan 2024 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
We develop a nonparametric identifiability theory that formalizes this principle and shows that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
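The sparsity regularization mentioned here can be sketched as an L1-style penalty on a soft adjacency matrix over latent factors, added to whatever fit term the model optimizes. Everything below (the function names, the logit parameterization, the penalty weight) is a hypothetical illustration, not the paper's actual estimator.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparsity_regularized_loss(fit_loss, graph_logits, lam=0.1):
    """Hypothetical objective: a model-fit term plus an L1-style penalty
    on the (probabilistic) adjacency matrix, pushing learned graph edges
    toward zero."""
    edge_probs = sigmoid(graph_logits)  # soft adjacency over latent factors
    return float(fit_loss + lam * edge_probs.sum())

# Two confident edges (logit 3.0) and two near-absent ones (logit -4.0):
logits = np.array([[ 3.0, -4.0],
                   [-4.0,  3.0]])
loss = sparsity_regularized_loss(1.25, logits, lam=0.1)
print(loss)  # fit term plus a small penalty proportional to edge mass
```

Increasing `lam` trades model fit against graph sparsity, which is the regularization knob the identifiability result is about.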
1 code implementation • 7 Nov 2023 • Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, Julius von Kügelgen, Francesco Locatello
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as different data modalities.
1 code implementation • 26 Nov 2022 • Sébastien Lachapelle, Tristan Deleu, Divyat Mahajan, Ioannis Mitliagkas, Yoshua Bengio, Simon Lacoste-Julien, Quentin Bertrand
Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited.
no code implementations • 15 Jul 2022 • Sébastien Lachapelle, Simon Lacoste-Julien
In this work, we introduce a generalization of this theory that applies to any ground-truth graph and specifies, via a new equivalence relation over models we call consistency, qualitatively how disentangled the learned representation is expected to be.
1 code implementation • 21 Jul 2021 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.
1 code implementation • 23 Nov 2020 • Ignavier Ng, Sébastien Lachapelle, Nan Rosemary Ke, Simon Lacoste-Julien, Kun Zhang
Recently, structure learning of directed acyclic graphs (DAGs) has been formulated as a continuous optimization problem by leveraging an algebraic characterization of acyclicity.
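One widely used algebraic characterization of acyclicity, of the kind this line of work builds on, is the NOTEARS-style trace-of-matrix-exponential function: it is zero exactly when the weighted adjacency matrix corresponds to a DAG, and differentiable otherwise, which is what makes continuous optimization possible. A minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d: zero iff the support of the weighted
    adjacency matrix W is a directed acyclic graph, positive otherwise.
    Differentiable, so it can serve as a constraint in continuous
    optimization over graphs."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

dag    = np.array([[0.0, 1.0],
                   [0.0, 0.0]])  # single edge 1 -> 2: acyclic
cyclic = np.array([[0.0, 1.0],
                   [1.0, 0.0]])  # edges both ways: a 2-cycle

print(acyclicity(dag))     # ≈ 0.0 for the acyclic case
print(acyclicity(cyclic))  # strictly positive, penalizing the cycle
```

Structure learning then minimizes a score (e.g. a likelihood) subject to `h(W) = 0`, typically via an augmented Lagrangian; the paper above analyzes the convergence behavior of exactly this kind of constrained scheme.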
1 code implementation • NeurIPS 2020 • Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, Alexandre Drouin
This work constitutes a new step in this direction by proposing a theoretically grounded method based on neural networks that can leverage interventional data.
1 code implementation • ICLR 2020 • Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, Simon Lacoste-Julien
We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data.
2 code implementations • ICLR 2020 • Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, Christopher Pal
We show that causal structures can be parameterized via continuous variables and learned end-to-end.
no code implementations • 22 Jan 2019 • Eric Larsen, Sébastien Lachapelle, Yoshua Bengio, Emma Frejinger, Simon Lacoste-Julien, Andrea Lodi
We formulate the problem as a two-stage optimal prediction stochastic program whose solution we predict with a supervised machine learning algorithm.
no code implementations • 31 Jul 2018 • Eric Larsen, Sébastien Lachapelle, Yoshua Bengio, Emma Frejinger, Simon Lacoste-Julien, Andrea Lodi
We aim to predict, at high speed, the expected TDOS associated with the second-stage problem, conditionally on the first-stage variables.