Engaging in a live debate requires a diverse set of skills, and Project Debater has been developed accordingly as a collection of components, each designed to perform a specific subtask.
Previous work on review summarization focused on measuring the sentiment toward the main aspects of the reviewed product or business, or on creating a textual summary.
Recent work has proposed to summarize arguments by mapping them to a small set of expert-generated key points, where the salience of each key point corresponds to the number of its matching arguments.
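To illustrate the idea, below is a minimal sketch of how key point salience could be computed once an argument-to-key-point matcher is available. The `matches` function is a placeholder assumption standing in for whatever matching model is used; it is not part of the original work.

```python
from collections import Counter
from typing import Callable, Dict, List

def key_point_salience(
    arguments: List[str],
    key_points: List[str],
    matches: Callable[[str, str], bool],
) -> Dict[str, int]:
    """Count, for each key point, how many arguments it matches.

    `matches(argument, key_point)` is a hypothetical predicate supplied by
    the caller (e.g. a supervised argument-to-key-point classifier).
    """
    salience = Counter()
    for arg in arguments:
        for kp in key_points:
            if matches(arg, kp):
                salience[kp] += 1
    # Key points ordered by how many arguments they summarize.
    return dict(salience.most_common())
```

In this sketch, the resulting counts serve as the salience scores: a key point matched by many arguments summarizes a larger share of the collection.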
Generating a concise summary from a large collection of arguments on a given topic is an intriguing yet understudied problem.
In Natural Language Understanding, the task of response generation usually focuses on responses to short texts, such as tweets or a single turn in a dialog.
To this end, we collected a large dataset of 400 speeches in English discussing 200 controversial topics, mined claims for each topic, and asked annotators to identify the mined claims mentioned in each speech.
We also present a spellchecker developed for this task, which outperforms standard spellcheckers when evaluated on it.
We applied several baseline methods to this task, to serve as a benchmark for future work on this dataset.