Likelihood-Based Generative Models

VQ-VAE

Introduced by van den Oord et al. in Neural Discrete Representation Learning

VQ-VAE is a type of variational autoencoder that uses vector quantisation to obtain a discrete latent representation. It differs from standard VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. To learn a discrete latent representation, the model incorporates ideas from vector quantisation (VQ): each encoder output is mapped to the nearest entry in a learnt codebook of embedding vectors. Using the VQ method allows the model to circumvent posterior collapse - where the latents are ignored when paired with a powerful autoregressive decoder - an issue typically observed in the VAE framework. Paired with an autoregressive prior over the discrete codes, the model can generate high-quality images, videos, and speech, as well as perform high-quality speaker conversion and unsupervised learning of phonemes.

Source: Neural Discrete Representation Learning
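The nearest-neighbour quantisation step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, toy codebook, and input values are made up for the example, and the learnable parts (codebook updates, commitment loss, straight-through gradients) are omitted.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook entry.

    z_e:      (N, D) array of encoder outputs
    codebook: (K, D) array of K embedding vectors
    Returns (indices, z_q): discrete code indices and quantised vectors.
    """
    # Squared Euclidean distance from every encoder vector to every code: (N, K)
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # (N,) discrete latent codes
    z_q = codebook[indices]          # (N, D) quantised representation
    return indices, z_q

# Toy example: a 2-entry codebook and 3 encoder outputs (illustrative values)
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z_e = np.array([[0.1, -0.1], [0.9, 1.2], [0.6, 0.6]])
indices, z_q = vector_quantize(z_e, codebook)
print(indices)  # [0 1 1]
```

In the full model, this argmin is non-differentiable, so training copies gradients straight through from the quantised output to the encoder, and the loss adds codebook and commitment terms to keep encoder outputs close to their assigned codes.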


Tasks


Task                  Papers   Share
Image Generation      7        6.60%
Speech Synthesis      7        6.60%
Quantization          6        5.66%
Video Generation      5        4.72%
Music Generation      4        3.77%
Denoising             3        2.83%
Voice Conversion      3        2.83%
Language Modelling    3        2.83%
Disentanglement       3        2.83%
