no code implementations • EACL (BEA) 2021 • Tariq Alhindi, Debanjan Ghosh
Argument mining is often addressed by a pipeline method in which segmentation of the text into argumentative units is performed first, followed by an argument component identification task.
1 code implementation • *SEM (NAACL) 2022 • Lingyu Gao, Debanjan Ghosh, Kevin Gimpel
We propose a type-controlled framework for inquisitive question generation.
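A common way to realize type control in question generation is to prepend a question-type control token to the source text before decoding. The sketch below illustrates this idea only; the type names and token format are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of type-controlled input construction: prepend a
# question-type control token so a seq2seq model can condition on it.
# The type inventory and token format are illustrative, not from the paper.

QUESTION_TYPES = {"explanation", "elaboration", "background", "definition"}

def build_input(passage, q_type):
    """Return the control-prefixed source string for the generator."""
    if q_type not in QUESTION_TYPES:
        raise ValueError(f"unknown question type: {q_type}")
    return f"<{q_type}> {passage}"

src = build_input("The glaciers retreated rapidly after 1850.", "explanation")
print(src)  # <explanation> The glaciers retreated rapidly after 1850.
```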
no code implementations • 28 Nov 2022 • Kevin Stowe, Debanjan Ghosh, Mengxuan Zhao
This work aims to employ natural language generation (NLG) to rapidly generate items for English language learning applications: this requires both language models capable of generating fluent, high-quality English and the ability to control the generated output to match the requirements of the relevant items.
no code implementations • 28 Oct 2022 • Sophia Chan, Swapna Somasundaran, Debanjan Ghosh, Mengxuan Zhao
We describe the AGReE system, which takes user-submitted passages as input and automatically generates grammar practice exercises that can be completed while reading.
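A minimal illustration of what a passage-based grammar exercise can look like is a cloze item: blank out a target construction in the passage and offer answer options. This is a hypothetical sketch of the exercise format, not AGReE's actual algorithm; the function and example are invented.

```python
import re

# Hypothetical cloze-exercise builder, in the spirit of passage-based grammar
# practice systems; this is NOT the AGReE system's actual method.

def make_cloze(passage, target, distractors):
    """Blank out one occurrence of `target` and offer answer options."""
    stem = re.sub(rf"\b{re.escape(target)}\b", "____", passage, count=1)
    options = sorted([target] + distractors)
    return {"stem": stem, "options": options, "answer": target}

ex = make_cloze("She has lived here since 2010.", "has lived",
                ["lives", "is living"])
print(ex["stem"])  # She ____ here since 2010.
```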
1 code implementation • 24 May 2022 • Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
Figurative language understanding has recently been framed as a recognizing textual entailment (RTE) task (a.k.a.
1 code implementation • Findings (ACL) 2021 • Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, Smaranda Muresan
We introduce a collection of recognizing textual entailment (RTE) datasets focused on figurative language.
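In the RTE framing, each figurative-language example becomes a premise–hypothesis pair with an entailment label. The instances below are invented for illustration of the data format; they are not drawn from the released datasets.

```python
# Hypothetical (premise, hypothesis, label) instances showing the RTE format
# for figurative language; these examples are invented, not from the datasets.

examples = [
    {"premise": "Her words were a dagger to his heart.",      # metaphor
     "hypothesis": "Her words hurt him deeply.",
     "label": "entailment"},
    {"premise": "Oh great, another Monday morning meeting.",  # sarcasm
     "hypothesis": "The speaker is pleased about the meeting.",
     "label": "not_entailment"},
]

for ex in examples:
    print(ex["label"], "|", ex["hypothesis"])
```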
1 code implementation • EACL 2021 • Debanjan Ghosh, Ritvik Shrivastava, Smaranda Muresan
We exploit joint modeling in terms of (a) applying discrete features that are useful in detecting sarcasm to the task of argumentative relation classification (agree/disagree/none), and (b) multitask learning for argumentative relation classification and sarcasm detection using deep learning architectures (e.g., dual Long Short-Term Memory (LSTM) with hierarchical attention and Transformer-based architectures).
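The multitask setup can be sketched as a pair of LSTM encoders over the context and response turns feeding two task-specific heads. This is a minimal illustrative sketch: the layer sizes, the plain concatenation, and the omission of hierarchical attention are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultitaskDualLSTM(nn.Module):
    """Hypothetical sketch of dual-LSTM multitask learning: one head for
    argumentative relation classification (agree/disagree/none) and one for
    binary sarcasm detection. Sizes are illustrative; the paper's model also
    uses hierarchical attention, omitted here for brevity."""

    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Dual encoders: one for the prior turn, one for the current turn.
        self.context_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.response_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Task-specific heads over the concatenated final hidden states.
        self.relation_head = nn.Linear(2 * hidden_dim, 3)  # agree/disagree/none
        self.sarcasm_head = nn.Linear(2 * hidden_dim, 2)   # sarcastic or not

    def forward(self, context_ids, response_ids):
        _, (h_ctx, _) = self.context_lstm(self.embed(context_ids))
        _, (h_rsp, _) = self.response_lstm(self.embed(response_ids))
        joint = torch.cat([h_ctx[-1], h_rsp[-1]], dim=-1)
        return self.relation_head(joint), self.sarcasm_head(joint)

model = MultitaskDualLSTM()
ctx = torch.randint(0, 1000, (2, 12))   # batch of 2 context turns
rsp = torch.randint(0, 1000, (2, 9))    # batch of 2 response turns
rel_logits, sarc_logits = model(ctx, rsp)
print(rel_logits.shape, sarc_logits.shape)
```

Sharing the encoders across the two losses is what lets sarcasm cues inform the relation classifier during training.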
no code implementations • ACL 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
no code implementations • WS 2020 • Debanjan Ghosh, Beata Beigman Klebanov, Yi Song
We present a computational exploration of argument critique writing by young students.
no code implementations • 20 May 2020 • Vikram Ramanarayanan, Matthew Mulholland, Debanjan Ghosh
An important step towards enabling English language learners to improve their conversational speaking proficiency involves automated scoring of multiple aspects of interactional competence and subsequent targeted feedback.
no code implementations • WS 2020 • Debanjan Ghosh, Avijit Vajpayee, Smaranda Muresan
Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs.
no code implementations • 3 Nov 2019 • Debanjan Ghosh, Elena Musi, Kartikeya Upasani, Smaranda Muresan
Human communication often involves the use of verbal irony or sarcasm, where the speakers usually mean the opposite of what they say.
no code implementations • CL 2018 • Debanjan Ghosh, Alexander R. Fabbri, Smaranda Muresan
To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the current turn.
no code implementations • 8 Jun 2018 • Elena Musi, Debanjan Ghosh, Smaranda Muresan
Drawing from a theoretically informed typology of concessions, we conduct an annotation task to label a set of polysemous lexical markers as introducing an argumentative concession or not, and we observe their distribution in threads that did and did not achieve persuasion.
no code implementations • 14 Apr 2018 • Debanjan Ghosh, Smaranda Muresan
Conversations in social media often contain irony or sarcasm, where users say the opposite of what they really mean.
2 code implementations • WS 2017 • Debanjan Ghosh, Alexander Richard Fabbri, Smaranda Muresan
To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the sarcastic response.
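The intuition behind conditioning on conversation context when classifying the response can be shown with a simple attention step: score each context state against the response representation and take a weighted summary. The vectors below are random stand-ins for LSTM hidden states, and the dot-product scoring is an illustrative assumption, not the paper's exact attention mechanism.

```python
import numpy as np

# Hypothetical attention step over context-turn states when classifying the
# sarcastic response; vectors are random stand-ins for LSTM hidden states.

rng = np.random.default_rng(0)
context_states = rng.normal(size=(5, 8))   # 5 context tokens, dim 8
response_vec = rng.normal(size=(8,))       # summary of the response turn

scores = context_states @ response_vec     # dot-product relevance scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax attention weights
context_summary = weights @ context_states # weighted context representation

print(context_summary.shape, float(weights.sum()))
```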