The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations

7 Nov 2015 • Felix Hill • Antoine Bordes • Sumit Chopra • Jason Weston

We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read.
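The benchmark's core mechanic is a cloze-style question: a model reads a passage and must recover a word removed from the final sentence, with performance reported separately for frequent function words and for rarer, more semantically loaded words. Below is a minimal Python sketch of how such an example could be assembled; the 20-sentence context, single query sentence, and 10-way candidate list follow the paper's Children's Book Test setup, while the function-word list, helper names, and candidate-sampling details are illustrative assumptions rather than the released data pipeline.

```python
# Minimal sketch of building a CBT-style cloze example.
# The context/query split and candidate count follow the paper; the
# sampling details and FUNCTION_WORDS list are illustrative assumptions.
import random
from typing import Dict, List

FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "with"}

def build_cloze_example(sentences: List[str], num_context: int = 20,
                        num_candidates: int = 10) -> Dict:
    """Turn num_context + 1 consecutive sentences into a cloze question:
    the final sentence has one word removed, and the model must pick it
    from a small candidate list drawn from the passage."""
    context = sentences[:num_context]
    query_tokens = sentences[num_context].split()

    # Prefer a lower-frequency content word as the answer, since such
    # words carry more semantic content than frequent function words.
    content_positions = [i for i, w in enumerate(query_tokens)
                         if w.lower() not in FUNCTION_WORDS]
    answer_pos = random.choice(content_positions or range(len(query_tokens)))
    answer = query_tokens[answer_pos]
    query = query_tokens[:answer_pos] + ["XXXXX"] + query_tokens[answer_pos + 1:]

    # Distractor candidates are sampled from elsewhere in the passage.
    vocab = {w for s in context for w in s.split() if w != answer}
    candidates = random.sample(sorted(vocab), k=min(num_candidates - 1, len(vocab)))
    candidates.append(answer)
    random.shuffle(candidates)

    return {"context": context, "query": " ".join(query),
            "answer": answer, "candidates": candidates}
```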

