Behind the Scenes of an Evolving Event Cloze Test

WS 2017 · Nathanael Chambers

This paper analyzes the narrative event cloze test and its recent evolution. The test removes one event from a document's chain of events, and systems predict the missing event. Although the test was originally proposed to evaluate learned knowledge of event scenarios (e.g., scripts and frames), most recent work builds ngram-like language models (LMs) to beat it. This paper argues that the test has slowly and unknowingly been altered to accommodate LMs. Most notably, tests are now auto-generated rather than built by hand, and no effort is made to include core script events. Recent work is not clear on evaluation goals and contains contradictory results. We implement several models, and show that the test's bias toward high-frequency events explains the inconsistencies. We conclude with recommendations on how to return to the test's original intent, and offer brief suggestions on a path forward.
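To make the setup concrete, here is a minimal Python sketch of the narrative cloze test as the abstract describes it, not the paper's actual code. The toy event chains, function names, and (verb, dependency) event representation are assumptions for illustration; the frequency "model" stands in for the kind of context-blind baseline the paper argues the auto-generated test rewards.

```python
from collections import Counter

# Toy event chains: each event is a (verb, dependency-of-protagonist)
# pair, a common representation in narrative cloze work. The chains
# themselves are invented for illustration.
chains = [
    [("arrest", "obj"), ("charge", "obj"), ("convict", "obj"), ("sentence", "obj")],
    [("arrest", "obj"), ("say", "subj"), ("charge", "obj"), ("plead", "subj")],
    [("say", "subj"), ("say", "subj"), ("arrest", "obj"), ("charge", "obj")],
]

def make_cloze_instances(chain):
    """Auto-generate cloze tests by holding out each event in turn.

    This mirrors the automatic generation the paper critiques: every
    event becomes a test item, with no effort to select core script events.
    """
    for i in range(len(chain)):
        held_out = chain[i]
        context = chain[:i] + chain[i + 1:]
        yield context, held_out

# A unigram-frequency "model": rank events by corpus frequency and
# ignore the context entirely. High-frequency events (e.g., reporting
# verbs like "say") dominate auto-generated tests, so even this
# context-blind baseline can score deceptively well.
event_counts = Counter(event for chain in chains for event in chain)

def predict_by_frequency(context, k=1):
    """Return the k most frequent events, independent of context."""
    return [event for event, _ in event_counts.most_common(k)]

# Evaluate recall@1 of the frequency baseline on the auto-generated test.
hits, total = 0, 0
for chain in chains:
    for context, held_out in make_cloze_instances(chain):
        total += 1
        if held_out in predict_by_frequency(context, k=1):
            hits += 1

print(f"frequency-baseline recall@1: {hits}/{total} = {hits / total:.2f}")
```

Even on this tiny toy corpus, the baseline earns credit whenever a frequent event happens to be the held-out one, which illustrates why auto-generated, frequency-skewed test sets can make context-insensitive models look competitive with script-knowledge models.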
