SentenceMIM: A Latent Variable Language Model

18 Feb 2020 · Micha Livne, Kevin Swersky, David J. Fleet

SentenceMIM is a probabilistic auto-encoder for language data, trained with Mutual Information Machine (MIM) learning to provide a fixed-length representation of variable-length language observations (similar to a VAE). Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than in existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare SentenceMIM with VAEs and AEs on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths. We demonstrate the versatility of SentenceMIM by applying a trained model to question-answering and transfer learning, without fine-tuning, outperforming VAEs and AEs with similar architectures.
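To make the training objective concrete, below is a minimal numerical sketch of a symmetric MIM-style loss for a toy 1-D linear-Gaussian encoder/decoder pair. All names, weights, and distributional choices (standard Gaussian prior, fixed noise scales) are illustrative assumptions for this sketch, not the paper's actual architecture; the point is only that the loss averages the encoding-path and decoding-path joint log-likelihoods with no KL weighting hyper-parameter to tune.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mean, std):
    """Element-wise log-density of N(mean, std^2)."""
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((x - mean) / std) ** 2

# Toy linear-Gaussian encoder q(z|x) and decoder p(x|z); weights are illustrative.
enc_w, enc_std = 0.9, 0.1
dec_w, dec_std = 1.1, 0.1

x = rng.normal(0.0, 1.0, size=1024)                  # observations, anchor q(x) = N(0, 1)
z = enc_w * x + enc_std * rng.normal(size=x.shape)   # samples from q(z|x)

# Symmetric MIM-style objective: average the two joint log-likelihoods,
# log q(z|x)q(x) and log p(x|z)p(z), over samples from the encoding path.
log_q_joint = gaussian_logpdf(z, enc_w * x, enc_std) + gaussian_logpdf(x, 0.0, 1.0)
log_p_joint = gaussian_logpdf(x, dec_w * z, dec_std) + gaussian_logpdf(z, 0.0, 1.0)

mim_loss = -0.5 * np.mean(log_q_joint + log_p_joint)
print(f"MIM-style loss on toy model: {mim_loss:.3f}")
```

Because both joints appear symmetrically, a degenerate encoder that ignores `x` (posterior collapse) is penalized through the reconstruction term `log p(x|z)`, which is the intuition behind MIM's robustness noted above.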


Results from the Paper


Ranked #1 on Question Answering on YahooCQA (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| Question Answering | YahooCQA | sMIM (1024) | P@1 | 0.683 | #2 | |
| Question Answering | YahooCQA | sMIM (1024) | MRR | 0.818 | #2 | |
| Question Answering | YahooCQA | sMIM (1024) + | P@1 | 0.757 | #1 | Yes |
| Question Answering | YahooCQA | sMIM (1024) + | MRR | 0.863 | #1 | Yes |

Methods


AE • MIM • VAE