Generative Models

Adversarial Latent Autoencoder

Introduced by Pidhorskyi et al. in Adversarial Latent Autoencoders

ALAE, or Adversarial Latent Autoencoder, is a type of autoencoder that attempts to overcome some of the limitations of generative adversarial networks. The architecture allows the latent distribution to be learned from data to address entanglement (A). The output data distribution is learned with an adversarial strategy (B). Thus, we retain the generative properties of GANs, as well as the ability to build on recent advances in this area. For instance, we can include independent sources of stochasticity, which have proven essential for generating image details, or leverage recent improvements in GAN loss functions, regularization, and hyperparameter tuning. Finally, to implement (A) and (B), AE reciprocity is imposed in the latent space (C). Therefore, we can avoid reconstruction losses based on a simple $\ell_2$ norm that operate in the data space, where they are often suboptimal, as is the case for images. Since reciprocity is imposed in the latent space, rather than by autoencoding the data space, the approach is named Adversarial Latent Autoencoder (ALAE).

Source: Adversarial Latent Autoencoders
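
Below is a minimal sketch of how (A), (B), and (C) fit together in a training step, using toy PyTorch MLPs: a mapping F from the prior to a learned latent space, a generator G, an encoder E, a discriminator D that operates on the latent space, and a latent-space reciprocity loss in place of a pixel-space reconstruction loss. Network sizes, optimizers, and loss weights here are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn
import torch.nn.functional as Fn

def mlp(d_in, d_out, d_hid=64):
    return nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                         nn.Linear(d_hid, d_out))

z_dim, w_dim, x_dim = 8, 8, 2          # toy dimensions (assumption)

F_map = mlp(z_dim, w_dim)              # F: prior z -> learned latent w, addresses (A)
G     = mlp(w_dim, x_dim)              # G: latent w -> data x
E     = mlp(x_dim, w_dim)              # E: data x -> latent w
D     = mlp(w_dim, 1)                  # D: discriminates in latent space, not data space

opt_d = torch.optim.Adam(list(E.parameters()) + list(D.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(list(F_map.parameters()) + list(G.parameters()), lr=2e-4)
opt_r = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)

def train_step(x_real):
    z = torch.randn(x_real.size(0), z_dim)

    # (B) adversarial update of E and D: real vs. generated samples,
    # both mapped into latent space by the encoder first.
    opt_d.zero_grad()
    d_real = D(E(x_real))
    d_fake = D(E(G(F_map(z)).detach()))
    loss_d = Fn.softplus(-d_real).mean() + Fn.softplus(d_fake).mean()
    loss_d.backward()
    opt_d.step()

    # (B) adversarial update of F and G: fool the latent discriminator.
    opt_g.zero_grad()
    loss_g = Fn.softplus(-D(E(G(F_map(z))))).mean()
    loss_g.backward()
    opt_g.step()

    # (C) reciprocity imposed in latent space: E(G(w)) should recover w,
    # replacing a data-space l2 reconstruction loss.
    opt_r.zero_grad()
    w = F_map(z).detach()
    loss_r = ((E(G(w)) - w) ** 2).mean()
    loss_r.backward()
    opt_r.step()
    return loss_d.item(), loss_g.item(), loss_r.item()

# usage: one step on a random toy batch
losses = train_step(torch.randn(16, x_dim))

The non-saturating softplus losses above are one common GAN objective choice; the key structural point is only that both the adversarial game and the reciprocity constraint act on the latent space.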


Tasks


Task              Papers  Share
Disentanglement   1       50.00%
Image Generation  1       50.00%

