Probabilistic Semantic Embedding

ICLR 2019  ·  Yue Jiao, Jonathon Hare, Adam Prügel-Bennett

We present an extension of a variational auto-encoder that creates semantically rich, coupled probabilistic latent representations that capture the semantics of multiple modalities of data. We demonstrate this model through experiments using images and textual descriptors as inputs and images as outputs. Our latent representations are not only capable of driving a decoder to generate novel data, but can also be used directly for annotation or classification. Using the MNIST and Fashion-MNIST datasets we show that the embedding not only provides better reconstruction and classification performance than the current state-of-the-art, but it also allows us to exploit the semantic content of pretrained word embedding spaces to do tasks such as image generation from labels outside of those seen during training.
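
For orientation, the sketch below shows one plausible way such a coupled multimodal VAE could be wired up: an image encoder and a word-embedding encoder each produce Gaussian posterior parameters over a shared latent space, and a single decoder reconstructs images from latent samples. This is an illustrative sketch, not the authors' implementation; the class name `MultimodalVAE`, the layer sizes, the 300-dimensional word vectors, and the simple additive KL terms are assumptions, and the paper's actual coupling between the two posteriors may differ.

```python
# Hypothetical sketch of a VAE whose shared latent is inferred from both an image
# and a pretrained word embedding of its label. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalVAE(nn.Module):
    def __init__(self, image_dim=784, word_dim=300, latent_dim=32):
        super().__init__()
        # Image encoder: parameters of q(z | image).
        self.enc = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
        self.mu_img = nn.Linear(256, latent_dim)
        self.logvar_img = nn.Linear(256, latent_dim)
        # Word-embedding encoder: parameters of q(z | label embedding).
        self.mu_txt = nn.Linear(word_dim, latent_dim)
        self.logvar_txt = nn.Linear(word_dim, latent_dim)
        # Decoder maps a latent sample back to image space.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim), nn.Sigmoid(),
        )

    def reparameterise(self, mu, logvar):
        # Standard reparameterisation trick: z = mu + sigma * eps.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, image, word_vec):
        h = self.enc(image)
        mu_i, lv_i = self.mu_img(h), self.logvar_img(h)
        mu_t, lv_t = self.mu_txt(word_vec), self.logvar_txt(word_vec)
        # Reconstruct from the image posterior during training.
        z = self.reparameterise(mu_i, lv_i)
        recon = self.dec(z)
        # Reconstruction term plus KL regularisers for both modalities; an
        # additional term tying the two posteriors together (e.g. a KL between
        # them) would couple the image and text distributions -- the exact form
        # used in the paper may differ from this sketch.
        kl_i = -0.5 * torch.sum(1 + lv_i - mu_i.pow(2) - lv_i.exp())
        kl_t = -0.5 * torch.sum(1 + lv_t - mu_t.pow(2) - lv_t.exp())
        bce = F.binary_cross_entropy(recon, image, reduction="sum")
        return recon, bce + kl_i + kl_t
```

In a setup like this, generation from a label alone would sample z from the word-embedding posterior and decode it, which is the kind of mechanism that could support the abstract's claim of generating images for labels not seen during training.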
