NICE, or Non-Linear Independent Components Estimation, is a framework for modeling complex high-dimensional densities. It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. To this end, a non-linear deterministic transformation of the data is learned that maps it to a latent space so that the transformed data conforms to a factorized distribution, i.e., the latent variables are independent. The transformation is parameterised so that computing the determinant of the Jacobian and the inverse Jacobian is trivial, yet it retains the ability to learn complex non-linear transformations via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood. The transformation used in NICE is the affine coupling layer without the scale term, known as the additive coupling layer, whose forward map and exact inverse are:
$$ y_{I_{2}} = x_{I_{2}} + m\left(x_{I_{1}}\right) $$
$$ x_{I_{2}} = y_{I_{2}} - m\left(y_{I_{1}}\right) $$
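The two equations above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the coupling function `m` stands in for the learned deep network, and the partition of the input into $x_{I_1}$ and $x_{I_2}$ is represented by two separate arrays. Because $y_{I_1} = x_{I_1}$ passes through unchanged and $m$ is only ever evaluated, never inverted, the inverse is exact regardless of how complex `m` is, and the Jacobian is unit lower-triangular (log-determinant zero).

```python
import numpy as np

def m(x1):
    # Stand-in for the learned coupling network m(.); any function works,
    # since the coupling is inverted by subtraction, not by inverting m.
    return np.tanh(x1)

def forward(x1, x2):
    # y_{I1} = x_{I1};  y_{I2} = x_{I2} + m(x_{I1})
    return x1, x2 + m(x1)

def inverse(y1, y2):
    # x_{I1} = y_{I1};  x_{I2} = y_{I2} - m(y_{I1})
    return y1, y2 - m(y1)

x1 = np.array([0.5, -1.0])
x2 = np.array([2.0, 0.3])
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
# Inversion is exact up to floating point, whatever m is.
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

Stacking several such layers, with the roles of the two partitions swapped between layers, yields an expressive yet trivially invertible overall transformation.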
Source: NICE: Non-linear Independent Components Estimation
| Task | Papers | Share |
|---|---|---|
| Graph Generation | 2 | 12.50% |
| Scene Graph Generation | 2 | 12.50% |
| Visual Grounding | 1 | 6.25% |
| Text Generation | 1 | 6.25% |
| Fairness | 1 | 6.25% |
| Image Captioning | 1 | 6.25% |
| Image Registration | 1 | 6.25% |
| Entity Disambiguation | 1 | 6.25% |
| Out-of-Distribution Detection | 1 | 6.25% |
| Component | Type |
|---|---|
| Affine Coupling | Bijective Transformation |
| Normalizing Flows | Distribution Approximation |