no code implementations • LREC 2022 • Jennifer Tracey, Owen Rambow, Claire Cardie, Adam Dalton, Hoa Trang Dang, Mona Diab, Bonnie Dorr, Louise Guthrie, Magdalena Markowska, Smaranda Muresan, Vinodkumar Prabhakaran, Samira Shaikh, Tomek Strzalkowski
We present the BeSt corpus, which records cognitive state: who believes what (i.e., factuality), and who has what sentiment towards what.
no code implementations • EMNLP 2020 • Ramy Eskander, Smaranda Muresan, Michael Collins
Our approach innovates in three ways: 1) a robust approach of selecting training instances via cross-lingual annotation projection that exploits best practices of unsupervised type and token constraints, word-alignment confidence and density of projected POS, 2) a Bi-LSTM architecture that uses contextualized word embeddings, affix embeddings and hierarchical Brown clusters, and 3) an evaluation on 12 diverse languages in terms of language family and morphological typology.
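The first innovation, cross-lingual annotation projection, can be illustrated with a minimal sketch (not the authors' implementation; the function name, threshold, and toy data are hypothetical): POS tags are copied from source tokens to aligned target tokens, and only sentences whose projection density clears a threshold are kept as training instances.

```python
# Hypothetical sketch of cross-lingual POS annotation projection:
# copy POS tags from aligned source tokens onto target tokens, then
# keep the sentence only if enough target tokens received a tag.

def project_pos(source_tags, alignments, target_len, min_density=0.8):
    """Project POS tags through (src_idx, tgt_idx) word alignments.

    Returns the projected tag sequence, or None when projection
    density falls below min_density (a sparsity filter).
    """
    projected = [None] * target_len
    for src_idx, tgt_idx in alignments:
        projected[tgt_idx] = source_tags[src_idx]
    density = sum(tag is not None for tag in projected) / target_len
    return projected if density >= min_density else None

# Fully aligned toy sentence: every target token gets a tag.
dense = project_pos(["DET", "NOUN", "VERB"], [(0, 0), (1, 1), (2, 2)], 3)
# Sparse alignment: only 1 of 3 target tokens is covered, so it is dropped.
sparse = project_pos(["DET", "NOUN", "VERB"], [(0, 0)], 3)
```

In a real pipeline the alignments would come from a word aligner and the density threshold would be tuned per language pair; here they are stand-ins to show the filtering logic.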
1 code implementation • SIGDIAL (ACL) 2021 • Tariq Alhindi, Brennan McManus, Smaranda Muresan
We discuss the connection between argument structure and check-worthy statements and develop several baseline models for detecting check-worthy statements in the climate change domain.
no code implementations • EMNLP 2021 • Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan
Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.
1 code implementation • NAACL 2022 • Ramy Eskander, Cass Lowry, Sujay Khandagale, Judith Klavans, Maria Polinsky, Smaranda Muresan
Our results show that the stem-based approach improves the POS models for all the target languages, with an average relative error reduction of 10.3% in accuracy per target language, and outperforms the word-based approach that operates on three times more data for about two thirds of the language pairs we consider.
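The reported "relative error reduction in accuracy" metric can be made concrete with a small helper (illustrative only; the example accuracies below are invented, not the paper's numbers): it measures how much of the baseline's error rate is eliminated.

```python
# Relative error reduction: the fraction of the baseline error rate
# that the improved model removes.

def relative_error_reduction(acc_baseline, acc_improved):
    """Compute (err_base - err_new) / err_base from two accuracies."""
    err_base = 1.0 - acc_baseline
    err_new = 1.0 - acc_improved
    return (err_base - err_new) / err_base

# Halving the error rate (50% -> 75% accuracy) is a 50% relative reduction.
half = relative_error_reduction(0.50, 0.75)
```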
1 code implementation • 24 May 2023 • Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.
1 code implementation • 23 May 2023 • Sky CH-Wang, Arkadiy Saakyan, Oliver Li, Zhou Yu, Smaranda Muresan
Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations.
no code implementations • 19 May 2023 • Robert Vacareanu, Siddharth Varia, Kishaloy Halder, Shuai Wang, Giovanni Paolini, Neha Anna John, Miguel Ballesteros, Smaranda Muresan
We explore how weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in aspect-based sentiment analysis (ABSA) tasks.
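The idea of weak supervision on unlabeled data can be sketched in miniature (this is not the paper's method; the cue lexicons, function names, and abstention rule are invented for illustration): noisy labeling functions tag unlabeled sentences, and only non-abstained pseudo-labels are added to the small gold training set.

```python
# Hypothetical weak-supervision sketch for few-shot sentiment:
# a cue-lexicon labeling function produces pseudo-labels, abstaining
# when the evidence is absent or contradictory.

POSITIVE_CUES = {"great", "excellent", "love"}
NEGATIVE_CUES = {"terrible", "awful", "hate"}

def weak_label(sentence):
    """Return a noisy sentiment label, or None to abstain."""
    tokens = set(sentence.lower().split())
    pos_hits = len(tokens & POSITIVE_CUES)
    neg_hits = len(tokens & NEGATIVE_CUES)
    if pos_hits > neg_hits:
        return "positive"
    if neg_hits > pos_hits:
        return "negative"
    return None  # abstain: no cue fired, or the cues tied

def augment(gold, unlabeled):
    """Extend the gold set with confidently pseudo-labeled sentences."""
    pseudo = [(s, weak_label(s)) for s in unlabeled]
    return gold + [(s, y) for s, y in pseudo if y is not None]
```

In practice the labeling functions would be far richer (patterns, existing models, distant supervision), but the abstain-then-augment loop is the core pattern.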
no code implementations • 24 Jan 2023 • Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, Smaranda Muresan
To move towards solving the fallacy recognition task, we treat these differences across datasets as multiple tasks and show that instruction-based prompting in a multitask setup based on the T5 model improves results over approaches built for a specific dataset, such as T5, BERT, or GPT-3.
1 code implementation • 20 Oct 2022 • Tuhin Chakrabarty, Justin Lewis, Smaranda Muresan
Recent work on question generation has largely focused on factoid questions such as who, what, where, when about basic facts.
1 code implementation • 17 Oct 2022 • Sky CH-Wang, Evan Li, Oliver Li, Smaranda Muresan, Zhou Yu
Affective responses to music are highly personal.
no code implementations • 16 Oct 2022 • Yi R. Fung, Tuhin Chakrabarty, Hao Guo, Owen Rambow, Smaranda Muresan, Heng Ji
Norm discovery is important for understanding and reasoning about the acceptable behaviors and potential violations in human communication and interactions.
no code implementations • 12 Oct 2022 • Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, Dan Roth
Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity.
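The four ABSA elements form a natural quadruple; a minimal record type makes the structure concrete (an illustrative schema with an invented example, not the paper's data format):

```python
# A minimal container for the four ABSA elements described above.
from dataclasses import dataclass

@dataclass
class AbsaQuad:
    aspect_term: str          # the span being evaluated, e.g. "battery life"
    aspect_category: str      # a predefined category label
    opinion_term: str         # the span expressing the opinion
    sentiment_polarity: str   # positive / negative / neutral

# Invented example: "The battery life is amazing."
quad = AbsaQuad("battery life", "laptop#battery", "amazing", "positive")
```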
1 code implementation • 24 May 2022 • Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
Figurative language understanding has recently been framed as a recognizing textual entailment (RTE) task (a.k.a. natural language inference (NLI)).
1 code implementation • 24 May 2022 • Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
In spite of the limited success of Continual Learning, we show that Language Models can be continual learners.
1 code implementation • EMNLP 2021 • Tuhin Chakrabarty, Aadit Trivedi, Smaranda Muresan
Enthymemes are defined as arguments where a premise or conclusion is left implicit.
no code implementations • Findings (ACL) 2021 • Elsbeth Turcan, Shuai Wang, Rishita Anubhai, Kasturi Bhattacharjee, Yaser Al-Onaizan, Smaranda Muresan
Detecting what emotions are expressed in text is a well-studied problem in natural language processing.
1 code implementation • ACL 2021 • Arkadiy Saakyan, Tuhin Chakrabarty, Smaranda Muresan
The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.
1 code implementation • ACL 2021 • Chenghao Yang, Yudong Zhang, Smaranda Muresan
Social media has become a valuable resource for the study of suicidal ideation and the assessment of suicide risk.
1 code implementation • ACL 2021 • Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, Iryna Gurevych
Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.
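A conceptual mapping between cognitive domains can be encoded very simply (an illustrative table in the spirit of conceptual metaphor theory; the domains, lexemes, and glosses below are invented, not the paper's resource):

```python
# Toy encoding of conceptual mappings: (target domain, source domain)
# pairs index a table of how source-domain lexemes are reinterpreted
# in the target domain.

CONCEPTUAL_MAPPINGS = {
    ("ARGUMENT", "WAR"): {
        "attack": "attack a position",
        "defend": "defend a claim",
    },
    ("LIFE", "JOURNEY"): {
        "crossroads": "a difficult decision",
    },
}

def metaphoric_frame(target_domain, source_domain, lexeme):
    """Look up how a source-domain word maps into the target domain."""
    mapping = CONCEPTUAL_MAPPINGS.get((target_domain, source_domain), {})
    return mapping.get(lexeme)
```

A generation system would use such mappings as soft constraints on decoding rather than as a hard lookup; the table just shows what "encoding conceptual mappings" means structurally.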
1 code implementation • Findings (ACL) 2021 • Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, Smaranda Muresan
We introduce a collection of recognizing textual entailment (RTE) datasets focused on figurative language.
1 code implementation • NAACL 2021 • Elsbeth Turcan, Smaranda Muresan, Kathleen McKeown
The problem of detecting psychological stress in online posts, and more broadly, of detecting people in distress or in need of help, is a sensitive application for which the ability to interpret models is vital.
1 code implementation • EACL 2021 • Debanjan Ghosh, Ritvik Shrivastava, Smaranda Muresan
We exploit joint modeling in terms of (a) applying discrete features that are useful in detecting sarcasm to the task of argumentative relation classification (agree/disagree/none), and (b) multitask learning for argumentative relation classification and sarcasm detection using deep learning architectures (e.g., dual Long Short-Term Memory (LSTM) with hierarchical attention and Transformer-based architectures).
no code implementations • NAACL 2021 • Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan
Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983).
1 code implementation • NAACL 2021 • Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, Nanyun Peng
Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning.
no code implementations • 25 Jan 2021 • Thamar Solorio, Mahsa Shafaei, Christos Smailis, Mona Diab, Theodore Giannakopoulos, Heng Ji, Yang Liu, Rada Mihalcea, Smaranda Muresan, Ioannis Kakadiaris
This white paper presents a summary of the discussions regarding critical considerations to develop an extensive repository of online videos annotated with labels indicating questionable content.
no code implementations • COLING 2020 • Tariq Alhindi, Smaranda Muresan, Daniel Preotiuc-Pietro
A 2018 study led by the Media Insight Project showed that most journalists think that a clear marking of what is news reporting and what is commentary or opinion (e.g., editorial, op-ed) is essential for gaining public trust.
no code implementations • EMNLP 2020 • Kasturi Bhattacharjee, Miguel Ballesteros, Rishita Anubhai, Smaranda Muresan, Jie Ma, Faisal Ladhak, Yaser Al-Onaizan
Leveraging large amounts of unlabeled data using Transformer-like architectures, like BERT, has gained popularity in recent times owing to their effectiveness in learning general representations that can then be fine-tuned for downstream tasks with much success.
1 code implementation • EMNLP 2020 • Tuhin Chakrabarty, Smaranda Muresan, Nanyun Peng
We also show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
no code implementations • WS 2020 • Debanjan Ghosh, Avijit Vajpayee, Smaranda Muresan
Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs.
1 code implementation • IJCNLP 2019 • Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy Mckeown, Alyssa Hwang
Our approach for relation prediction uses contextual information in terms of fine-tuning a pre-trained language model and leveraging discourse relations based on Rhetorical Structure Theory.
1 code implementation • 28 Apr 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
1 code implementation • ACL 2020 • Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, Smaranda Muresan
The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence.
no code implementations • 3 Nov 2019 • Debanjan Ghosh, Elena Musi, Kartikeya Upasani, Smaranda Muresan
Human communication often involves the use of verbal irony or sarcasm, where the speakers usually mean the opposite of what they say.
no code implementations • WS 2019 • Tariq Alhindi, Jonas Pfeiffer, Smaranda Muresan
This paper presents the CUNLP submission for the NLP4IF 2019 shared-task on FineGrained Propaganda Detection.
no code implementations • CL 2018 • Debanjan Ghosh, Alexander R. Fabbri, Smaranda Muresan
To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the current turn.
no code implementations • 8 Jun 2018 • Elena Musi, Debanjan Ghosh, Smaranda Muresan
Drawing from a theoretically-informed typology of concessions, we conduct an annotation task to label a set of polysemous lexical markers as introducing an argumentative concession or not and we observe their distribution in threads that achieved and did not achieve persuasion.
no code implementations • 14 Apr 2018 • Debanjan Ghosh, Smaranda Muresan
Conversations in social media often contain the use of irony or sarcasm, when the users say the opposite of what they really mean.
2 code implementations • WS 2017 • Debanjan Ghosh, Alexander Richard Fabbri, Smaranda Muresan
To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the sarcastic response.