What Knowledge is Needed to Solve the RTE5 Textual Entailment Challenge?

10 Jun 2018 · Peter Clark

This document gives a knowledge-oriented analysis of about 20 interesting Recognizing Textual Entailment (RTE) examples, drawn from the test set of the 2009 RTE5 competition. The analysis sets aside shallow statistical matching between the text (T) and the hypothesis (H), and instead asks: What would it take to reasonably infer that T implies H? What world knowledge would be needed for this task? Although such knowledge-intensive techniques have not had much success in RTE evaluations, ultimately an intelligent system should be expected to know and deploy the kind of world knowledge required to perform this kind of reasoning. The selected examples are typically ones that our RTE system (called BLUE) got wrong and that require world knowledge to answer. In particular, the analysis covers cases where there was near-perfect lexical overlap between T and H, yet the entailment was NO, i.e., examples that most likely all current RTE systems got wrong as well. A nice example is #341 (page 26), which requires inferring from "a river floods" that "a river overflows its banks". Seems it should be easy, right? Enjoy!
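To make concrete why shallow matching struggles on such pairs, here is a minimal sketch of a word-overlap entailment baseline. This is not BLUE's actual method, and the tokenization, stopword list, and 0.7 threshold are illustrative assumptions; the first T/H pair is a constructed example, and the second paraphrases example #341 from the abstract.

```python
# Sketch of a shallow word-overlap entailment baseline (illustrative only;
# not the method of BLUE or of any actual RTE5 system).

STOPWORDS = {"a", "an", "the", "its", "to", "of"}

def content_words(sentence: str) -> set[str]:
    """Lowercase, strip punctuation, and drop stopwords."""
    cleaned = "".join(c if c.isalnum() else " " for c in sentence.lower())
    return {w for w in cleaned.split() if w not in STOPWORDS}

def overlap_entails(text: str, hypothesis: str, threshold: float = 0.7) -> bool:
    """Predict ENTAILED when most hypothesis content words occur in the text."""
    t, h = content_words(text), content_words(hypothesis)
    return bool(h) and len(h & t) / len(h) >= threshold

# Constructed pair: near-perfect lexical overlap, yet the gold answer is NO.
print(overlap_entails("John sold the car to Mary.",
                      "Mary sold the car to John."))       # True  -> wrong YES

# Paraphrase of example #341: entailment is YES, but it hinges on the world
# knowledge that flooding implies overflowing, which overlap cannot supply.
print(overlap_entails("The river flooded.",
                      "The river overflowed its banks."))  # False -> wrong NO
```

No threshold setting gets both pairs right, since the first has high overlap with entailment NO and the second low overlap with entailment YES; that gap is exactly what the paper's knowledge-oriented analysis targets.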
