To move toward solving the fallacy recognition task, we treat these differences across datasets as multiple tasks and show how instruction-based prompting in a multitask setup based on the T5 model improves results over approaches built for a specific dataset, such as T5, BERT, or GPT-3.
Human communication often involves verbal irony or sarcasm, where the speaker usually means the opposite of what they say.
We present a unique dataset of student source-based argument essays to facilitate research on the relations between content, argumentation skills, and assessment.
Drawing on a theoretically informed typology of concessions, we conduct an annotation task to label a set of polysemous lexical markers as introducing an argumentative concession or not, and we observe their distribution in threads that did and did not achieve persuasion.
Argumentative text has been analyzed both theoretically and computationally in terms of argumentative structure, which consists of argument components (e.g., claims, premises) and their argumentative relations (e.g., support, attack).