Search Results for author: Ilyes Khemakhem

Found 6 papers, 4 papers with code

Nonlinear Independent Component Analysis for Principled Disentanglement in Unsupervised Deep Learning

no code implementations · 29 Mar 2023 · Aapo Hyvarinen, Ilyes Khemakhem, Hiroshi Morioka

A central problem in unsupervised deep learning is how to find useful representations of high-dimensional data, sometimes called "disentanglement".

Disentanglement

Identifiability of latent-variable and structural-equation models: from linear to nonlinear

no code implementations · 6 Feb 2023 · Aapo Hyvärinen, Ilyes Khemakhem, Ricardo Monti

An old problem in multivariate statistics is that linear Gaussian models are often unidentifiable, i.e., some parameters cannot be uniquely estimated.

Time Series · Time Series Analysis
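
The unidentifiability claim in the abstract can be seen in a few lines of numpy: for a linear Gaussian model x = A s with independent standard-normal sources s, the observed distribution depends on A only through the covariance A Aᵀ, so any orthogonal rotation of A fits the data equally well. The matrices below are arbitrary illustrative values, not from the paper.

```python
import numpy as np

# A linear Gaussian model x = A s with s ~ N(0, I) is determined, up to
# its mean, by the covariance A @ A.T.  For any orthogonal R, the mixing
# matrix A @ R yields the exact same covariance, so A cannot be
# recovered from data: the model is unidentifiable.
A = np.array([[2.0, 1.0],
              [0.5, 1.5]])
theta = 0.7  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

cov_A  = A @ A.T
cov_AR = (A @ R) @ (A @ R).T

print(np.allclose(cov_A, cov_AR))  # same observed distribution
print(np.allclose(A, A @ R))       # but different mixing matrices
```

Non-Gaussian sources break this rotational symmetry, which is the starting point for the nonlinear identifiability results the paper surveys.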

Causal Autoregressive Flows

2 code implementations · 4 Nov 2020 · Ilyes Khemakhem, Ricardo Pio Monti, Robert Leech, Aapo Hyvärinen

We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks, ranging from causal discovery to making interventional and counterfactual predictions.

Causal Discovery · Causal Inference +1
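
The ordering the abstract refers to can be sketched with a minimal two-variable affine autoregressive flow: the triangular structure transforms x1 first and x2 conditionally on x1, so the flow encodes the ordering x1 → x2, analogous to a causal ordering. The conditioners `mu` and `log_sigma` below are hypothetical stand-ins for the learned networks.

```python
import numpy as np

# Hypothetical conditioner functions; a real flow would learn these.
def mu(x1):        return 0.5 * x1
def log_sigma(x1): return 0.1 * np.tanh(x1)

def forward(x1, x2):
    """Observations -> base noise (the 'abduction' direction)."""
    z1 = x1
    z2 = (x2 - mu(x1)) * np.exp(-log_sigma(x1))
    return z1, z2

def inverse(z1, z2):
    """Base noise -> observations, generated in the ordering x1 -> x2."""
    x1 = z1
    x2 = mu(x1) + np.exp(log_sigma(x1)) * z2
    return x1, x2

# Counterfactual "what if x1 had been 1.5?": abduct the noise from an
# observed pair, clamp x1, and regenerate x2 with the same noise.
x1_obs, x2_obs = 0.8, 1.2
_, z2_abducted = forward(x1_obs, x2_obs)
_, x2_cf = inverse(1.5, z2_abducted)
print(x2_cf)
```

Because the flow is exactly invertible, the same machinery supports interventional predictions (clamp x1 in the inverse pass with fresh noise) and counterfactuals (as above, with abducted noise).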

Autoregressive flow-based causal discovery and inference

2 code implementations · 18 Jul 2020 · Ricardo Pio Monti, Ilyes Khemakhem, Aapo Hyvarinen

We posit that autoregressive flow models are well-suited to performing a range of causal inference tasks, ranging from causal discovery to making interventional and counterfactual predictions.

Causal Discovery · Causal Inference +1
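
The causal-discovery side of the claim rests on an asymmetry that holds for non-Gaussian data: in the causal direction the regression residual is independent of the input, while in the anti-causal direction it is not (Darmois–Skitovich). The sketch below scores both directions with a simple least-squares dependence statistic; it is an illustrative analogue of the principle, not the flow-based method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear non-Gaussian pair with ground truth x1 -> x2.
n = 50_000
x1 = rng.uniform(-1, 1, n)
x2 = 2.0 * x1 + rng.uniform(-1, 1, n)

def dependence(cause, effect):
    """|corr(residual^2, input^2)| after regressing effect on cause.

    Near zero when the residual is truly independent of the input,
    i.e. when the hypothesised direction is the causal one.
    """
    beta = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - beta * cause
    return abs(np.corrcoef(resid**2, cause**2)[0, 1])

score_12 = dependence(x1, x2)  # hypothesis x1 -> x2
score_21 = dependence(x2, x1)  # hypothesis x2 -> x1
direction = "x1 -> x2" if score_12 < score_21 else "x2 -> x1"
print(direction)
```

With Gaussian noise both scores would vanish and the direction would be undecidable, which is why non-Gaussianity (or nonlinearity) is essential to this family of methods.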

ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA

1 code implementation · NeurIPS 2020 · Ilyes Khemakhem, Ricardo Pio Monti, Diederik P. Kingma, Aapo Hyvärinen

We consider the identifiability theory of probabilistic models and establish sufficient conditions under which the representations learned by a very broad family of conditional energy-based models are unique in function space, up to a simple transformation.

Transfer Learning
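
A conditional energy-based model of the kind the abstract describes can be sketched as an unnormalised conditional log-density given by a dot product of two feature maps, log p(x|y) ∝ f(x)·g(y). The random single-layer networks below are hypothetical stand-ins for the trained deep networks, not the paper's code; the identifiability result concerns the learned representation f.

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_y, d_feat = 3, 2, 8
W_f = rng.standard_normal((d_feat, d_x))  # stand-in weights for f
W_g = rng.standard_normal((d_feat, d_y))  # stand-in weights for g

def f(x):
    """Feature extractor for x (the representation that is identifiable)."""
    return np.tanh(W_f @ x)

def g(y):
    """Feature map for the conditioning variable y."""
    return np.tanh(W_g @ y)

def log_density_unnorm(x, y):
    """Unnormalised conditional log-density f(x) . g(y)."""
    return f(x) @ g(y)

x = rng.standard_normal(d_x)
y = rng.standard_normal(d_y)
print(log_density_unnorm(x, y))
```

The dot-product factorisation is what ties the model to nonlinear ICA: under the paper's conditions, any two models matching the data must have feature maps equal up to a simple (e.g. affine) transformation.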

Variational Autoencoders and Nonlinear ICA: A Unifying Framework

2 code implementations · 10 Jul 2019 · Ilyes Khemakhem, Diederik P. Kingma, Ricardo Pio Monti, Aapo Hyvärinen

We address the identifiability problem by showing that for a broad family of deep latent-variable models, identification of the true joint distribution over observed and latent variables is actually possible up to very simple transformations, thus achieving a principled and powerful form of disentanglement.

Disentanglement
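
The family of models the abstract refers to can be sketched generatively: an auxiliary variable u (e.g. a class label or time index) modulates a conditionally factorial prior p(z|u), and observations arise through a nonlinear mixing x = f(z). The conditioning on u is what restores identifiability. The parameters and mixing below are hypothetical stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

d_z, d_x, n_u = 2, 4, 5
prior_mean   = rng.standard_normal((n_u, d_z))        # per-u prior means
prior_logvar = 0.5 * rng.standard_normal((n_u, d_z))  # per-u log-variances
W = rng.standard_normal((d_x, d_z))                   # stand-in decoder weights

def sample_z(u):
    """Sample from the conditionally factorial Gaussian prior p(z|u)."""
    return prior_mean[u] + np.exp(0.5 * prior_logvar[u]) * rng.standard_normal(d_z)

def mix(z):
    """Injective nonlinear mixing x = f(z) (a stand-in for a deep decoder)."""
    return np.tanh(W @ z) + 0.1 * (W @ z)

u = 3
z = sample_z(u)
x = mix(z)
```

With an unconditional Gaussian prior this model would inherit the rotational unidentifiability of linear Gaussian ICA; the u-dependent prior breaks that symmetry, which is the crux of the unifying framework.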
