An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal "noise". Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate, from the reduced encoding, a representation as close as possible to its original input, hence its name.
Extracted from: Wikipedia
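The encoder/decoder structure described above can be sketched in a few lines of PyTorch. This is a minimal sketch, not a reference implementation: the layer sizes, the MSE reconstruction loss, and the random stand-in batch are illustrative assumptions, not details taken from the text.

```python
# Minimal autoencoder sketch (assumes PyTorch is installed).
# Dimensions (784 -> 32 -> 784) are illustrative assumptions.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, code_dim: int = 32):
        super().__init__()
        # Encoder (reduction side): compresses the input into a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder (reconstructing side): rebuilds the input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


# Training is unsupervised: the target is the input itself, and the loss is
# the reconstruction error between input and output.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # stand-in batch; replace with real data
reconstruction = model(x)
loss = loss_fn(reconstruction, x)  # how closely the output matches the input
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the code (the encoder output) is lower-dimensional than the input, the network is forced to keep only the most informative structure of the data, which is what makes the learned encoding useful for dimensionality reduction and denoising.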
| Task | Papers | Share |
|---|---|---|
| Anomaly Detection | 35 | 8.77% |
| Decoder | 27 | 6.77% |
| Denoising | 12 | 3.01% |
| Dimensionality Reduction | 11 | 2.76% |
| Unsupervised Anomaly Detection | 10 | 2.51% |
| Clustering | 10 | 2.51% |
| Deep Learning | 10 | 2.51% |
| Quantization | 7 | 1.75% |
| Image Generation | 7 | 1.75% |