Invertible neural networks (INNs) are neural network architectures that are invertible by design.
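To make "invertible by design" concrete, here is a minimal sketch of an affine coupling layer, one standard INN building block (a generic illustration, not any particular paper's architecture; `s_net` and `t_net` stand in for arbitrary learned networks):

```python
# Affine coupling: split the input and transform one half conditioned on
# the other; the inverse exists in closed form no matter what s and t are.
import numpy as np

def coupling_forward(x, s_net, t_net):
    """y1 = x1;  y2 = x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = np.split(x, 2, axis=-1)
    return np.concatenate([x1, x2 * np.exp(s_net(x1)) + t_net(x1)], axis=-1)

def coupling_inverse(y, s_net, t_net):
    """Exact inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    y1, y2 = np.split(y, 2, axis=-1)
    return np.concatenate([y1, (y2 - t_net(y1)) * np.exp(-s_net(y1))], axis=-1)

# Toy check with arbitrary fixed "networks" s and t.
s, t = (lambda h: 0.1 * h), (lambda h: h - 1.0)
x = np.random.randn(4, 8)
assert np.allclose(coupling_inverse(coupling_forward(x, s, t), s, t), x)
```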
A key assumption in supervised learning is that training and test data follow the same probability distribution.
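Stated formally (in our notation, not necessarily the source's), the assumption is

$$p_{\mathrm{train}}(x, y) \;=\; p_{\mathrm{test}}(x, y),$$

and distribution shift refers to settings where this equality fails.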
Causal graphs (CGs) compactly represent knowledge of the data-generating processes underlying observed data distributions.
Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable estimator of the Jacobian determinant.
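This tractability comes from the instantaneous change-of-variables formula, which trades the usual log-determinant for a trace that Hutchinson's estimator approximates with random probes (standard results behind NODE-based flows such as FFJORD, stated here in our notation):

$$\log p\big(\mathbf{z}(t_1)\big) \;=\; \log p\big(\mathbf{z}(t_0)\big) \;-\; \int_{t_0}^{t_1} \operatorname{tr}\!\left(\frac{\partial f}{\partial \mathbf{z}(t)}\right) \mathrm{d}t, \qquad \operatorname{tr}(A) \;=\; \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\!\left[\epsilon^{\top} A\, \epsilon\right].$$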
We answer this question by establishing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.
Approximate Bayesian computation (ABC) is a likelihood-free inference method that has been employed in various applications.
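The simplest variant is rejection ABC; the sketch below (our illustration with a made-up prior, simulator, and tolerance, not any particular paper's setup) accepts a prior draw whenever simulated data land close to the observations:

```python
# Rejection ABC: no likelihood evaluations, only forward simulation plus a
# summary-statistic distance and an acceptance tolerance epsilon.
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(loc=2.0, scale=1.0, size=100)  # "observed" data

def prior():                      # prior over the unknown mean
    return rng.uniform(-5.0, 5.0)

def simulate(theta, n=100):       # likelihood-free: we can only sample
    return rng.normal(loc=theta, scale=1.0, size=n)

def distance(x, y):               # distance between summary statistics
    return abs(x.mean() - y.mean())

accepted = []
while len(accepted) < 1000:
    theta = prior()
    if distance(simulate(theta), x_obs) < 0.1:   # tolerance epsilon
        accepted.append(theta)

print("ABC posterior mean:", np.mean(accepted))  # close to the true mean 2.0
```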
Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation.
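One standard estimator reduces DRE to probabilistic classification: train a classifier to distinguish samples of $p$ from samples of $q$, then convert its output into the ratio via Bayes' rule. The sketch below is our illustration of this reduction, not a specific paper's method:

```python
# DRE by probabilistic classification:
# r(x) = p(x)/q(x) = (n_q/n_p) * P(c=1|x) / P(c=0|x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 1.0, size=(2000, 1))   # samples from p
x_q = rng.normal(0.5, 1.2, size=(2000, 1))   # samples from q

X = np.vstack([x_p, x_q])
c = np.r_[np.ones(len(x_p)), np.zeros(len(x_q))]  # 1 = drawn from p, 0 = from q
clf = LogisticRegression().fit(X, c)

def ratio(x):
    """Estimated density ratio p(x)/q(x) from the classifier's posterior."""
    pr = clf.predict_proba(x)[:, 1]
    return (len(x_q) / len(x_p)) * pr / (1.0 - pr)

print(ratio(np.array([[0.0], [2.0]])))  # ratio estimates at two test points
```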
We take the structural equations in causal modeling as an example and propose a novel domain adaptation (DA) method, which we show to be useful both theoretically and experimentally.
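To fix ideas about this setting (a hypothetical model of ours, purely illustrative and not the proposed method), structural equations let the causal mechanism for $Y$ stay invariant across domains while the distribution of an exogenous cause shifts, which is exactly the structure a DA method can exploit:

```python
# Two domains generated by the same structural equations; only the
# exogenous distribution of Z differs, so the mechanism Y := f(X) + N_y
# transfers from source to target unchanged.
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(z_mean, n=1000):
    z = rng.normal(z_mean, 1.0, size=n)         # exogenous cause (shifts)
    x = 2.0 * z + rng.normal(0.0, 0.5, size=n)  # X := 2Z + N_x
    y = x**2 + rng.normal(0.0, 0.1, size=n)     # Y := X^2 + N_y (invariant)
    return x, y

x_src, y_src = sample_domain(z_mean=0.0)  # source domain
x_tgt, y_tgt = sample_domain(z_mean=2.0)  # target: shifted P(Z), same mechanism
```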
However, this assumption is unrealistic in many instances of positive-unlabeled (PU) learning because it fails to capture selection bias in the labeling process.
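The assumption in question is presumably the "selected completely at random" (SCAR) condition common in PU learning; writing $s \in \{0, 1\}$ for whether an example is labeled, SCAR posits a constant labeling propensity, whereas selection bias makes it instance-dependent (our notation):

$$\text{SCAR:}\;\; p(s = 1 \mid x, y = 1) = c, \qquad \text{selection bias:}\;\; p(s = 1 \mid x, y = 1) = e(x).$$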
On the other hand, matrix completion (MC) methods can recover a matrix from various forms of missing information by exploiting the assumption that the underlying matrix is low-rank.
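The canonical convex formulation (standard in the MC literature, stated in our notation) recovers the matrix from the observed index set $\Omega$ by minimizing the nuclear norm $\|X\|_{*}$, a convex surrogate for rank:

$$\min_{X}\; \|X\|_{*} \quad \text{subject to} \quad X_{ij} = M_{ij} \;\; \text{for all } (i, j) \in \Omega.$$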