no code implementations • 29 Mar 2023 • Aapo Hyvärinen, Ilyes Khemakhem, Hiroshi Morioka
A central problem in unsupervised deep learning is how to find useful representations of high-dimensional data, sometimes called "disentanglement".
no code implementations • 6 Feb 2023 • Aapo Hyvärinen, Ilyes Khemakhem, Ricardo Monti
An old problem in multivariate statistics is that linear Gaussian models are often unidentifiable, i.e., some parameters cannot be uniquely estimated.
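The non-identifiability can be made concrete with a minimal sketch (illustrative only, not the paper's construction): if the sources are standard Gaussian, the data distribution depends on the mixing matrix only through its covariance, so any orthogonal rotation of the mixing matrix is invisible to the data.

```python
import numpy as np

# Minimal sketch of Gaussian non-identifiability: with s ~ N(0, I)
# and x = A s, the law of x depends on A only through A A^T, so A
# and A R (R orthogonal) generate identical data distributions.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))            # arbitrary mixing matrix
theta = 0.7                            # arbitrary rotation angle
R = np.eye(3)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]  # orthogonal rotation
A2 = A @ R                             # a genuinely different mixing

# Both mixings induce exactly the same Gaussian observation law:
same_law = np.allclose(A @ A.T, A2 @ A2.T)   # True
different_params = not np.allclose(A, A2)     # True
```

Since `A` and `A2` differ but produce the same covariance, no amount of data can distinguish them, which is exactly the unidentifiability the entry refers to.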
2 code implementations • 4 Nov 2020 • Ilyes Khemakhem, Ricardo Pio Monti, Robert Leech, Aapo Hyvärinen
We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to a range of causal inference tasks, from causal discovery to making interventional and counterfactual predictions.
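The ordering the abstract refers to can be sketched with a toy affine autoregressive transform (an illustrative assumption, not the paper's architecture): each output `z_i` depends on `x_i` and only on earlier variables `x_<i>`, so the Jacobian is lower-triangular, which is what makes the variable ordering analogous to a causal ordering.

```python
import numpy as np

# Toy affine autoregressive transform:
#   z_i = (x_i - mu_i(x_<i)) / sigma_i(x_<i)
# The conditioners mu, sigma below are arbitrary placeholders; any
# function of the strict prefix x_<i preserves the ordering.

def mu(prefix):
    return 0.5 * prefix.sum()

def sigma(prefix):
    return np.exp(0.1 * prefix.sum())

def flow(x):
    z = np.empty_like(x)
    for i in range(len(x)):
        z[i] = (x[i] - mu(x[:i])) / sigma(x[:i])
    return z

x = np.array([0.3, -1.2, 0.8])
z = flow(x)
# dz/dx is lower-triangular: z_i never depends on x_j for j > i,
# so the transform respects the fixed variable ordering.
```

A finite-difference check of the Jacobian confirms the upper-triangular entries vanish, i.e. information only flows "downstream" along the ordering.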
2 code implementations • 18 Jul 2020 • Ricardo Pio Monti, Ilyes Khemakhem, Aapo Hyvärinen
We posit that autoregressive flow models are well-suited to a range of causal inference tasks, from causal discovery to making interventional and counterfactual predictions.
1 code implementation • NeurIPS 2020 • Ilyes Khemakhem, Ricardo Pio Monti, Diederik P. Kingma, Aapo Hyvärinen
We consider the identifiability theory of probabilistic models and establish sufficient conditions under which the representations learned by a very broad family of conditional energy-based models are unique in function space, up to a simple transformation.
2 code implementations • 10 Jul 2019 • Ilyes Khemakhem, Diederik P. Kingma, Ricardo Pio Monti, Aapo Hyvärinen
We address this issue by showing that for a broad family of deep latent-variable models, identification of the true joint distribution over observed and latent variables is actually possible up to very simple transformations, thus achieving a principled and powerful form of disentanglement.
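A rough intuition for why identifiability can be recovered (a simplified illustration, not the paper's deep latent-variable model or its auxiliary-variable conditioning): once the latents are non-Gaussian, a rotation of the mixing is no longer invisible, because it changes the observable marginals. The sketch below uses iid Laplace sources and checks that a 45-degree rotation measurably shifts the excess kurtosis toward Gaussian.

```python
import numpy as np

# Illustrative only: non-Gaussian sources make a rotated mixing
# detectable from data, in contrast to the Gaussian case.
rng = np.random.default_rng(1)
s = rng.laplace(size=(200_000, 2))           # iid Laplace sources
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = s @ R.T                                   # rotated "observations"

def excess_kurtosis(v):
    v = v - v.mean()
    return (v ** 4).mean() / (v ** 2).mean() ** 2 - 3.0

# Laplace marginals have excess kurtosis 3; mixing two of them at
# 45 degrees pulls the marginal toward Gaussian (0), so the
# rotation leaves a statistical fingerprint in the data.
k_source = excess_kurtosis(s[:, 0])   # close to 3
k_mixed = excess_kurtosis(x[:, 0])    # noticeably smaller
```

This is the basic mechanism behind identifiability "up to very simple transformations": with non-Gaussian (or, in the paper's setting, conditionally structured) latents, distinct mixings yield distinct data distributions.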