Semi-supervised Sequence Learning

NeurIPS 2015 · Andrew M. Dai · Quoc V. Le

We present two approaches that use unlabeled data to improve sequence learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a conventional language model in natural language processing. The second approach is to use a sequence autoencoder, which reads the input sequence into a vector and predicts the input sequence again. In our experiments, we find that long short-term memory recurrent networks pretrained with the two approaches are more stable and generalize better.
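
To make the two pretraining objectives concrete, here is a minimal sketch, not the authors' code, assuming PyTorch; the names RecurrentLM, SequenceAutoencoder, vocab_size, and the toy data are hypothetical. It shows a next-token language-model loss and a sequence-autoencoder reconstruction loss computed on unlabeled sequences; the pretrained LSTM weights would then initialize a supervised sequence model.

```python
import torch
import torch.nn as nn


class RecurrentLM(nn.Module):
    """First approach: a conventional language model that predicts the next token."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # (batch, seq_len, vocab_size)


class SequenceAutoencoder(nn.Module):
    """Second approach: read the input into a vector, then predict the input again."""

    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        emb = self.embed(tokens)
        _, state = self.encoder(emb)              # final (h, c) state = sequence vector
        dec_hidden, _ = self.decoder(emb, state)  # decode conditioned on that vector
        return self.out(dec_hidden)


def token_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
    )


if __name__ == "__main__":
    vocab_size = 1000
    unlabeled = torch.randint(0, vocab_size, (8, 20))  # 8 unlabeled sequences, length 20

    # Language-model pretraining: predict token t+1 from tokens up to t.
    lm = RecurrentLM(vocab_size)
    lm_loss = token_cross_entropy(lm(unlabeled[:, :-1]), unlabeled[:, 1:])

    # Sequence-autoencoder pretraining: reconstruct the full input sequence.
    sa = SequenceAutoencoder(vocab_size)
    sa_loss = token_cross_entropy(sa(unlabeled), unlabeled)

    print(lm_loss.item(), sa_loss.item())
    # After pretraining, the LSTM parameters serve as the starting point
    # for a supervised sequence learning model.
```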
