An Autoencoder is a bottleneck architecture that compresses a high-dimensional input into a low-dimensional latent code (the encoder) and then reconstructs the input from this latent code (the decoder).
Image: Michael Massi
Source: Reducing the Dimensionality of Data with Neural Networks
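As a rough illustration (not taken from the linked paper), the sketch below shows a minimal fully connected autoencoder in PyTorch; the layer widths (784 → 128 → 32), the MSE reconstruction loss, and the dummy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected autoencoder: input -> bottleneck -> reconstruction."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the high-dimensional input into a low-dimensional code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # latent code (the bottleneck)
        return self.decoder(z)     # reconstruction of the input

# Training sketch: minimize reconstruction error between output and input
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.rand(64, 784)            # dummy batch standing in for flattened images
for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), x)  # target is the input itself
    loss.backward()
    optimizer.step()
```

Because the target is the input itself, no labels are required; the bottleneck forces the network to learn a compressed representation, which is why autoencoders are commonly used for dimensionality reduction, denoising, and anomaly detection.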
| Task | Papers | Share |
|---|---|---|
| Decoder | 41 | 5.97% |
| Anomaly Detection | 34 | 4.95% |
| Denoising | 28 | 4.08% |
| Self-Supervised Learning | 24 | 3.49% |
| Image Generation | 20 | 2.91% |
| Dimensionality Reduction | 20 | 2.91% |
| Semantic Segmentation | 14 | 2.04% |
| Diversity | 14 | 2.04% |
| Quantization | 13 | 1.89% |