Argument generation is a challenging task, and research on it is timely given its potential impact on social media and the dissemination of information.
The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets.
One of the most impressive human endeavors of the past two decades is the collection and categorization of human knowledge in the free and accessible format that is Wikipedia.
Extraction of financial and economic events from text has previously been done mostly using rule-based methods, with more recent works employing machine learning techniques.
no code implementations • 25 Nov 2019 • Liat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Halfon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, Noam Slonim
One of the main tasks in argument mining is the retrieval of argumentative content pertaining to a given topic.
Recent advancements in machine reading and listening comprehension involve the annotation of long texts.
In Natural Language Understanding, the task of response generation is usually focused on responses to short texts, such as tweets or a turn in a dialog.
In this work we aim to explicitly define a taxonomy of such principled recurring arguments, and, given a controversial topic, to automatically identify which of these arguments are relevant to the topic.
With the growing interest in social applications of Natural Language Processing and Computational Argumentation, a natural question is how controversial a given concept is.
To this end, we collected a large dataset of 400 speeches in English discussing 200 controversial topics, mined claims for each topic, and asked annotators to identify the mined claims mentioned in each speech.
When debating a controversial topic, it is often desirable to expand the boundaries of discussion.
We applied baseline methods to the task, to serve as a benchmark for future work on this dataset.
Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation.
no code implementations • Noam Slonim, Ehud Aharoni, Carlos Alzate, Roy Bar-Haim, Yonatan Bilu, Lena Dankin, Iris Eiron, Daniel Hershcovich, Shay Hummel, Mitesh Khapra, Tamar Lavee, Ran Levy, Paul Matchen, Anatoly Polnarov, Vikas Raykar, Ruty Rinott, Amrita Saha, Naama Zwerdling, David Konopnicki, Dan Gutfreund