Learning a parametric model of a data distribution is a well-known statistical problem that has seen renewed interest as it is brought to scale in deep learning.
We introduce a new general identifiable framework for principled disentanglement referred to as Structured Nonlinear Independent Component Analysis (SNICA).
We posit that autoregressive flow models are well-suited to a range of causal inference tasks, from causal discovery to interventional and counterfactual prediction.
We consider the problem of inferring causal relationships between two or more passively observed variables.
Kernel density estimation is viewed symbolically as $X\rightharpoonup Y$, where the random variable $X$ is smoothed to $Y= X+N(0,\sigma^2 I_d)$, and empirical Bayes is the machinery to denoise in a least-squares sense, which we express as $X \leftharpoondown Y$.
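A minimal sketch of this smoothing/denoising pair, assuming a one-dimensional toy prior on $X$: the forward pass $X\rightharpoonup Y$ adds Gaussian noise, and the backward pass $X \leftharpoondown Y$ applies Tweedie's formula, $E[X\,|\,Y=y] = y + \sigma^2 \frac{d}{dy}\log p_Y(y)$, with $p_Y$ taken as a Gaussian kernel density estimate. The two-point prior and all variable names are illustrative choices, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.5, 400

# Forward pass X -> Y: kernel-smooth samples of X with Gaussian noise.
x = rng.choice([-2.0, 2.0], size=n)        # toy two-point prior on X
y = x + sigma * rng.standard_normal(n)     # Y = X + N(0, sigma^2)

def tweedie_denoise(y_query, centres, sigma):
    """Backward pass X <- Y via Tweedie's formula:
    E[X | Y = y] = y + sigma^2 * d/dy log p_Y(y),
    where p_Y is a Gaussian KDE with bandwidth sigma over `centres`."""
    d = y_query[:, None] - centres[None, :]
    w = np.exp(-0.5 * (d / sigma) ** 2)    # kernel weights
    score = (-(d / sigma**2) * w).sum(1) / w.sum(1)
    return y_query + sigma**2 * score

x_hat = tweedie_denoise(y, x, sigma)       # least-squares denoising of Y
# Denoising reduces mean-squared error relative to the noisy samples.
assert np.mean((x_hat - x) ** 2) < np.mean((y - x) ** 2)
```

Because the KDE bandwidth equals the smoothing noise level, the Tweedie estimate here coincides with the exact posterior mean under the empirical prior; in a genuine empirical-Bayes setting only the noisy samples `y` would be available for estimating $p_Y$.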
Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure.
Then, based on the observation that conventional classification learning with neural networks is implicitly assuming an exponential family as a generative model, we introduce a method for clustering unlabeled data by estimating a finite mixture of distributions in an exponential family.
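As a point of reference for what "estimating a finite mixture of distributions in an exponential family" looks like in the simplest case, here is classical EM for a two-component Gaussian mixture (Gaussians being one exponential family). This is a textbook baseline, not the neural-network-based method the sentence describes; the data-generating clusters and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled 1-D data drawn from two clusters.
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

# Initial mixture parameters: means, variances, mixing weights.
mu, var, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    log_p = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
    r = pi * np.exp(log_p)
    r /= r.sum(1, keepdims=True)
    # M-step: re-estimate the exponential-family parameters from soft counts.
    nk = r.sum(0)
    mu = (r * x[:, None]).sum(0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(0) / nk
    pi = nk / len(x)

labels = r.argmax(1)  # hard cluster assignments for the unlabeled data
```

After convergence the estimated means land near the true cluster centres at $\pm 3$; swapping in another exponential-family component density changes only the E-step likelihood and the M-step moment updates.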
Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable.
Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables.
Model selection based on classical information criteria, such as BIC, is generally computationally demanding, but its properties are well studied.
In this paper, we propose a new algorithm for learning causal orders that is robust against one typical violation of the model assumptions: latent confounders.