A Sparse Autoencoder is a type of autoencoder that uses sparsity to achieve an information bottleneck. Specifically, the loss function is constructed so that activations within a layer are penalized. The sparsity constraint can be imposed with L1 regularization on the activations, or with a KL divergence between the expected average activation of each neuron and a target sparsity level $p$.
Image: Jeff Jordan. Read his blog post for a detailed summary of autoencoders.
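The loss described above can be sketched in NumPy. This is a minimal illustration, not the implementation from any particular paper: the function names, the target sparsity `rho`, and the penalty weights `beta` (KL term) and `lam` (L1 term) are illustrative assumptions.

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    """KL divergence between a Bernoulli with mean rho (the target sparsity)
    and Bernoullis with means rho_hat (the observed average activations),
    summed over hidden units."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def sparse_ae_loss(x, x_hat, hidden, rho=0.05, beta=3.0, lam=0.0):
    """Reconstruction error plus sparsity penalties.
    beta weights the KL term; lam weights an optional L1 term."""
    recon = np.mean((x - x_hat) ** 2)
    rho_hat = np.mean(hidden, axis=0)   # average activation per hidden unit
    return (recon
            + beta * kl_divergence(rho, rho_hat)
            + lam * np.mean(np.abs(hidden)))

# Toy batch: 4 samples, 3 hidden activations each (values in (0, 1)).
x = np.ones((4, 2))
hidden_sparse = np.array([[0.04, 0.06, 0.05]] * 4)   # near the target rho
hidden_dense = np.full((4, 3), 0.9)                   # far from the target

loss_sparse = sparse_ae_loss(x, x, hidden_sparse)
loss_dense = sparse_ae_loss(x, x, hidden_dense)
```

With perfect reconstruction, the loss is driven entirely by the sparsity term, so `loss_dense` comes out much larger than `loss_sparse`.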
| Task | Papers | Share |
|---|---|---|
| General Classification | 4 | 12.50% |
| Denoising | 2 | 6.25% |
| Small Data Image Classification | 2 | 6.25% |
| Image Retrieval | 1 | 3.13% |
| Information Retrieval | 1 | 3.13% |
| Quantization | 1 | 3.13% |
| Point Cloud Completion | 1 | 3.13% |
| Point cloud reconstruction | 1 | 3.13% |
| Point Set Upsampling | 1 | 3.13% |