A Variational Autoencoder (VAE) is a likelihood-based generative model. It consists of an encoder, which maps data $x$ to a latent representation $z$, and a decoder, which maps a latent representation $z$ back to a reconstruction $\hat{x}$. Because the true posterior $p(z \mid x)$ is intractable, the model is trained with variational inference: the encoder defines an approximate posterior $q(z \mid x)$, and training maximizes the evidence lower bound (ELBO) on the data log-likelihood.
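As a minimal sketch of the pieces described above, the NumPy snippet below wires an encoder, the reparameterization trick ($z = \mu + \sigma \cdot \epsilon$), a decoder, and the two ELBO terms (reconstruction and KL divergence to a standard normal prior). The dimensions and randomly initialized weights are hypothetical stand-ins, not trained parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative only).
x_dim, h_dim, z_dim = 4, 8, 2

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(x_dim, h_dim))
W_mu = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_logvar = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))

def encode(x):
    """Encoder q(z|x): map x to the mean and log-variance of a Gaussian."""
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, so the sample is differentiable in mu, sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Decoder p(x|z): map a latent sample back to a reconstruction in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))  # sigmoid output

def elbo(x):
    """Evidence lower bound: reconstruction term minus KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    # Bernoulli log-likelihood (valid for x in [0, 1]).
    recon = np.sum(x * np.log(x_hat + 1e-9) + (1 - x) * np.log(1 - x_hat + 1e-9))
    # Closed-form KL between the diagonal Gaussian q(z|x) and the prior N(0, I).
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon - kl

x = rng.uniform(size=(1, x_dim))
print(elbo(x))
```

In a real implementation the weights would be trained by gradient ascent on the ELBO; the reparameterization trick is what makes that gradient estimate low-variance and differentiable.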
Source: Auto-Encoding Variational Bayes
| Task | Papers | Share |
|---|---|---|
| Image Generation | 37 | 5.85% |
| Disentanglement | 33 | 5.22% |
| Denoising | 20 | 3.16% |
| Time Series Analysis | 14 | 2.22% |
| Anomaly Detection | 13 | 2.06% |
| Quantization | 13 | 2.06% |
| Image Classification | 13 | 2.06% |
| Language Modelling | 12 | 1.90% |
| Text Generation | 11 | 1.74% |