A Simple Method for Commonsense Reasoning

7 Jun 2018  ·  Trieu H. Trinh, Quoc V. Le

Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset (Levesque et al., 2011). In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score the multiple-choice questions posed by commonsense reasoning tests. On both the Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at the word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task, and show that diversity of training data plays an important role in test performance. Further analysis shows that our system successfully discovers the features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.
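The core idea is to substitute each candidate answer into the pronoun slot and keep the one the language model finds more probable. The sketch below illustrates this scoring scheme under some assumptions: it uses GPT-2 from Hugging Face Transformers as a stand-in for the paper's RNN language models, scores the full substituted sentence (rather than the paper's "partial" variant, which scores only the words after the substitution), and the helper names `sentence_log_prob` and `resolve` are illustrative, not from the paper.

```python
# Minimal sketch of LM-based candidate scoring for Winograd-style questions.
# GPT-2 is used here only as an illustrative substitute for the paper's RNN LMs.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the LM assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean token cross-entropy,
        # so the summed log-probability is -loss * number_of_predicted_tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

def resolve(template: str, candidates: list[str]) -> str:
    """Substitute each candidate into the pronoun slot; return the likelier one."""
    scores = {c: sentence_log_prob(template.format(c)) for c in candidates}
    return max(scores, key=scores.get)

# Example Winograd schema question.
template = "The trophy doesn't fit in the suitcase because {} is too big."
print(resolve(template, ["the trophy", "the suitcase"]))  # expected: "the trophy"
```

In this setup, ensembling amounts to summing (or averaging) the per-model scores for each candidate before taking the argmax, which is how combining the 14 language models reported below could be approximated.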


Datasets


Introduced in the Paper: CC-Stories

Used in the Paper: SQuAD, WSC
Task                   | Dataset                   | Model              | Metric | Value | Global Rank
Common Sense Reasoning | Winograd Schema Challenge | Ensemble of 14 LMs | Score  | 63.7  | #6
Common Sense Reasoning | Winograd Schema Challenge | Char-LM-partial    | Score  | 57.9  | #10
Common Sense Reasoning | Winograd Schema Challenge | Word-LM-partial    | Score  | 62.6  | #8

Methods


No methods listed for this paper.