no code implementations • EMNLP 2021 • Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan
Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.
1 code implementation • 24 May 2023 • Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.
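The two-stage pipeline described in this snippet (an LLM elaborates the metaphor, and a diffusion model renders the elaboration) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the prompt wording, function names, and injected `llm`/`text_to_image` callables are all assumptions.

```python
# Hypothetical sketch of the LLM -> diffusion pipeline described above.
# Prompt wording and all names here are illustrative assumptions.

def build_elaboration_prompt(metaphor: str) -> str:
    """Build a Chain-of-Thought style prompt asking an LLM to spell out
    the implicit meaning and concrete objects of a linguistic metaphor."""
    return (
        f"Metaphor: {metaphor}\n"
        "Let's think step by step. What is the implicit meaning, and which "
        "concrete objects should a picture of this metaphor contain?\n"
        "Visual elaboration:"
    )

def visualize_metaphor(metaphor: str, llm, text_to_image):
    """Chain an LLM with a text-to-image diffusion model: the LLM's
    visual elaboration of the metaphor becomes the image prompt."""
    elaboration = llm(build_elaboration_prompt(metaphor))  # stage 1: LLM
    return text_to_image(elaboration)                      # stage 2: diffusion
```

`llm` and `text_to_image` are deliberately left as injected callables, so any completion API and any diffusion backend (e.g. a `diffusers` pipeline) could be plugged in without changing the chaining logic.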
1 code implementation • 23 May 2023 • Sky CH-Wang, Arkadiy Saakyan, Oliver Li, Zhou Yu, Smaranda Muresan
Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations.
1 code implementation • 24 May 2022 • Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
Figurative language understanding has been recently framed as a recognizing textual entailment (RTE) task (a.k.a. natural language inference).
1 code implementation • 7 Sep 2021 • Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan
Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.
1 code implementation • ACL 2021 • Arkadiy Saakyan, Tuhin Chakrabarty, Smaranda Muresan
The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.