no code implementations • NAACL (SocialNLP) 2021 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Discrepancies exist among different cultures or languages.
no code implementations • EMNLP 2021 • Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan
Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.
1 code implementation • 24 May 2023 • Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor, containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.
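The LLM-to-diffusion hand-off described in this entry can be approximated with off-the-shelf tools. The sketch below is not the authors' released code: the stand-in models (Flan-T5 in place of Instruct GPT-3, Stable Diffusion v1.5) and the prompt wording are assumptions.

```python
# Hedged sketch of the LLM -> text-to-image pipeline described above.
# Model names and prompt wording are illustrative assumptions, not the paper's setup.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Step 1: an instruction-following LM elaborates the metaphor into a literal visual scene.
elaborator = pipeline("text2text-generation", model="google/flan-t5-base")
metaphor = "My bedroom is a pigsty."
cot_prompt = (
    "Explain the implicit meaning of the metaphor, list the objects it evokes, "
    "then describe one concrete scene that depicts it.\n"
    f"Metaphor: {metaphor}\nScene:"
)
scene = elaborator(cot_prompt, max_new_tokens=64)[0]["generated_text"]

# Step 2: the visual elaboration becomes the prompt for a text-to-image diffusion model.
# Assumes a CUDA GPU is available.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
sd(scene).images[0].save("visual_metaphor.png")
```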
no code implementations • 24 Jan 2023 • Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, Smaranda Muresan
To move towards solving the fallacy recognition task, we approach these differences across datasets as multiple tasks and show how instruction-based prompting in a multitask setup based on T5 improves results over approaches built for a specific dataset on top of models such as T5, BERT or GPT-3.
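For intuition, instruction-based multitask prompting with T5 can look roughly like the sketch below; the instruction template, dataset name, and label set are assumptions, not the paper's prompts.

```python
# Hedged sketch of instruction-style multitask prompting with T5 for fallacy recognition.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def classify_fallacy(text, dataset_name, label_set):
    # Each dataset becomes its own "task" by naming it and listing its label
    # inventory in the instruction, so one model can be fine-tuned across all of them.
    prompt = (
        f"Task: identify the logical fallacy ({dataset_name}). "
        f"Options: {', '.join(label_set)}. Text: {text}"
    )
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=8)
    return tok.decode(out[0], skip_special_tokens=True)

print(classify_fallacy(
    "Everyone is buying this phone, so it must be the best one.",
    "example-dataset", ["ad populum", "ad hominem", "slippery slope"],
))
```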
1 code implementation • 25 Oct 2022 • Tuhin Chakrabarty, Vishakh Padmakumar, He He
The core component of our system is a language model fine-tuned on a diverse collection of instructions for poetry writing.
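A minimal sketch of the kind of instruction-to-output pair such a model could be fine-tuned on; the field names and wording are assumptions, not the released training data.

```python
# Hypothetical instruction-tuning example for collaborative poetry writing.
poetry_example = {
    "instruction": "Write a next line that ends with a word rhyming with 'light' "
                   "and contains a simile.",
    "context": "The city sleeps beneath a veil of light,",
    "output": "its windows glowing like embers in the night.",
}
```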
1 code implementation • 20 Oct 2022 • Tuhin Chakrabarty, Justin Lewis, Smaranda Muresan
Recent work on question generation has largely focused on factoid questions such as who, what, where, when about basic facts.
1 code implementation • 24 May 2022 • Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
In spite of the limited success of Continual Learning, we show that Language Models can be continual learners.
1 code implementation • 24 May 2022 • Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
Figurative language understanding has been recently framed as a recognizing textual entailment (RTE) task (a.k.a. natural language inference).
1 code implementation • EMNLP 2021 • Tuhin Chakrabarty, Aadit Trivedi, Smaranda Muresan
Enthymemes are defined as arguments where a premise or conclusion is left implicit.
1 code implementation • 7 Sep 2021 • Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan
Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.
1 code implementation • 31 Aug 2021 • Tuhin Chakrabarty, Yejin Choi, Vered Shwartz
Figurative language is ubiquitous in English.
1 code implementation • ACL 2021 • Arkadiy Saakyan, Tuhin Chakrabarty, Smaranda Muresan
The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.
1 code implementation • ACL 2021 • Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, Iryna Gurevych
Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.
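A toy illustration of what an encoded conceptual mapping might look like; the schema and the example mapping below are assumptions, not the paper's representation.

```python
# Hypothetical encoding of a conceptual mapping between cognitive domains,
# in the spirit of conceptual metaphor theory (e.g., LIFE IS A JOURNEY).
mapping = {
    "target_domain": "LIFE",
    "source_domain": "JOURNEY",
    "correspondences": {
        "person": "traveler",
        "goals": "destinations",
        "difficulties": "obstacles on the road",
    },
}
literal = "She finally overcame her difficulties."
metaphoric = "She finally cleared the obstacles on her road."
```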
1 code implementation • Findings (ACL) 2021 • Tuhin Chakrabarty, Debanjan Ghosh, Adam Poliak, Smaranda Muresan
We introduce a collection of recognizing textual entailment (RTE) datasets focused on figurative language.
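For intuition, a figurative premise/hypothesis pair can be scored with an off-the-shelf NLI model; the model choice and the example pair below are illustrative assumptions.

```python
# Hedged sketch: scoring a figurative RTE pair with a generic NLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")
premise = "After the layoffs, the office was a ghost town."
hypothesis = "Almost no one was left in the office."
# The pipeline accepts a premise/hypothesis pair as text / text_pair.
print(nli({"text": premise, "text_pair": hypothesis}))
```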
1 code implementation • NAACL 2021 • Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, Nanyun Peng
Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning.
no code implementations • NAACL 2021 • Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan
Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983).
no code implementations • NAACL 2021 • Sarik Ghazarian, Zixi Liu, Tuhin Chakrabarty, Xuezhe Ma, Aram Galstyan, Nanyun Peng
Having engaging and informative conversations with users is the utmost goal for open-domain conversational systems.
1 code implementation • EMNLP 2020 • Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, Nanyun Peng
Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.
1 code implementation • EMNLP 2020 • Tuhin Chakrabarty, Smaranda Muresan, Nanyun Peng
We also show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.
no code implementations • ACL 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
1 code implementation • IJCNLP 2019 • Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy Mckeown, Alyssa Hwang
Our approach to relation prediction uses contextual information by fine-tuning a pre-trained language model and leveraging discourse relations based on Rhetorical Structure Theory.
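A hedged sketch of argument-pair relation prediction by fine-tuning a pre-trained encoder; the label set and model choice below are assumptions, not the paper's exact configuration.

```python
# Sentence-pair classification sketch for argument relation prediction.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
labels = ["support", "attack", "no-relation"]  # assumed label inventory

claim = "Remote work should stay permanent."
reply = "Studies show productivity rose when offices reopened."
# Encode the pair so the model sees both spans with a separator,
# as in standard sentence-pair fine-tuning.
inputs = tok(claim, reply, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(-1).item()])  # arbitrary until the head is fine-tuned
```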
1 code implementation • 28 Apr 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
1 code implementation • ACL 2020 • Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, Smaranda Muresan
The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence.
1 code implementation • 10 Apr 2020 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Perspective differences exist among different cultures or languages.
no code implementations • WS 2019 • Siddharth Varia, Christopher Hidey, Tuhin Chakrabarty
Word pairs across argument spans have been shown to be effective for predicting the discourse relation between them.
1 code implementation • WS 2019 • Tuhin Chakrabarty, Kilol Gupta, Smaranda Muresan
The goal of any social media platform is to facilitate healthy and meaningful interactions among its users.
no code implementations • SEMEVAL 2019 • Tuhin Chakrabarty, Smaranda Muresan
Community Question Answering forums are very popular nowadays, as they represent effective means for communities to share information around particular topics.
1 code implementation • NAACL 2019 • Tuhin Chakrabarty, Christopher Hidey, Kathleen McKeown
Claims are the central component of an argument.
1 code implementation • WS 2018 • Tuhin Chakrabarty, Tariq Alhindi, Smaranda Muresan
Our team finished 6th out of 24 teams on the leaderboard based on the preliminary results, with a FEVER score of 49.06 on the blind test set compared to 27.45 for the baseline system.
no code implementations • 24 Sep 2018 • Tuhin Chakrabarty, Kilol Gupta
The original goal of any social media platform is to enable users to engage in healthy and meaningful conversations.