no code implementations • Findings (EMNLP) 2021 • Faeze Brahman, Meng Huang, Oyvind Tafjord, Chao Zhao, Mrinmaya Sachan, Snigdha Chaturvedi
When reading a literary piece, readers often make inferences about various characters’ roles, personalities, relationships, intents, actions, etc.
1 code implementation • 23 Oct 2023 • Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman, Muhao Chen, Snigdha Chaturvedi
Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies.
no code implementations • 17 Oct 2023 • Somnath Basu Roy Chowdhury, Nicholas Monath, Ahmad Beirami, Rahul Kidambi, Avinava Dubey, Amr Ahmed, Snigdha Chaturvedi
In the online setting, where the algorithm has access to a single instance at a time, estimating the group fairness objective requires additional storage and significantly more computation (e.g., forward/backward passes) than the task-specific objective at every time step.
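As a minimal illustration of that extra bookkeeping (a sketch, not the paper's algorithm): a demographic-parity-style group fairness objective is a group-level quantity, so an online learner has to store and update per-group running statistics at every step, on top of the usual per-example task loss. All names below are illustrative.

```python
from collections import defaultdict

# Running per-group statistics that an online fairness estimate must keep
# around, in addition to whatever the task loss itself needs.
positive_prob_sum = defaultdict(float)
count = defaultdict(int)

def update_parity_gap(pred_prob, group):
    """Consume one (prediction, group) pair and return the current estimate
    of the demographic-parity gap |E[pred | g=0] - E[pred | g=1]|."""
    positive_prob_sum[group] += pred_prob
    count[group] += 1
    if count[0] == 0 or count[1] == 0:
        return 0.0  # gap undefined until both groups have been observed
    return abs(positive_prob_sum[0] / count[0] - positive_prob_sum[1] / count[1])

for prob, g in [(0.9, 0), (0.2, 1), (0.7, 0), (0.4, 1)]:
    print(update_parity_gap(prob, g))
```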
1 code implementation • 2 Feb 2023 • Krzysztof Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
no code implementations • 4 Dec 2022 • Faeze Brahman, Baolin Peng, Michel Galley, Sudha Rao, Bill Dolan, Snigdha Chaturvedi, Jianfeng Gao
We propose a new grounded keys-to-text generation task: the task is to generate a factual description of an entity given a set of guiding keys and grounding passages.
1 code implementation • 2 Dec 2022 • Chao Zhao, Faeze Brahman, Kaiqiang Song, Wenlin Yao, Dian Yu, Snigdha Chaturvedi
To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset.
no code implementations • 14 Nov 2022 • Yiyuan Li, Tong Che, Yezhen Wang, Zhengbao Jiang, Caiming Xiong, Snigdha Chaturvedi
In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for factual probing in PLMs that leverages the symmetry of the task by constructing symmetrical prompts for subject and object prediction.
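SPE itself learns continuous prompts, but the symmetry it exploits can be illustrated with plain discrete cloze prompts: one prompt masks the object given the subject, and the mirrored prompt masks the subject given the object. The prompt strings and model choice below are illustrative assumptions, not the paper's setup.

```python
from transformers import pipeline

# Discrete-prompt illustration of symmetrical prompting for factual probing
# (SPE optimizes continuous prompt vectors instead of fixed templates).
fill = pipeline("fill-mask", model="bert-base-uncased")

subject, obj = "Paris", "France"
object_prompt = f"{subject} is the capital of [MASK]."   # predict the object
subject_prompt = f"[MASK] is the capital of {obj}."      # symmetrical: predict the subject

print(fill(object_prompt, top_k=3))   # the model should rank "france" highly
print(fill(subject_prompt, top_k=3))  # the model should rank "paris" highly
```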
no code implementations • 1 Nov 2022 • Anvesh Rao Vijjini, Faeze Brahman, Snigdha Chaturvedi
In this paper, we introduce the task of modeling interpersonal relationships for story generation.
no code implementations • 15 Sep 2022 • Somnath Basu Roy Chowdhury, Nicholas Monath, Avinava Dubey, Amr Ahmed, Snigdha Chaturvedi
We then use these representations to quantify the relevance of review sentences using a novel approximate geodesic-distance-based scoring mechanism.
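A generic sketch of geodesic-distance-based relevance scoring, assuming geodesic distances are approximated by shortest paths on a k-nearest-neighbor graph over the sentence representations; this is only an illustration, not the paper's exact mechanism.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def geodesic_relevance_scores(sentence_embeddings, n_neighbors=3):
    """Score sentences by centrality under approximate geodesic distances:
    shortest-path distances on a k-NN graph over their representations."""
    knn = kneighbors_graph(sentence_embeddings, n_neighbors, mode="distance")
    geo = shortest_path(knn, method="D", directed=False)   # geodesic distances
    geo[np.isinf(geo)] = geo[np.isfinite(geo)].max()       # patch disconnected pairs
    return -geo.mean(axis=1)   # closer to everything => higher relevance

embeddings = np.random.rand(10, 32)   # stand-in for learned sentence representations
print(geodesic_relevance_scores(embeddings))
```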
1 code implementation • 25 Aug 2022 • Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting.
1 code implementation • Findings (NAACL) 2022 • Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi
In particular, we hypothesize that the order of the input concepts can affect the PTM's ability to utilize its commonsense knowledge.
1 code implementation • Findings (ACL) 2022 • Chao Zhao, Tenghao Huang, Somnath Basu Roy Chowdhury, Muthu Kumar Chandrasekaran, Kathleen McKeown, Snigdha Chaturvedi
A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document.
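A minimal sketch of that concatenation baseline: join all documents into one meta-document and run a single-document extractive step over it. The frequency-based sentence scorer below is only a stand-in for a real summarizer.

```python
import re
from collections import Counter

def extractive_meta_summary(documents, k=3):
    """Concatenate documents into a single meta-document, then select the
    top-k sentences as in single-document extractive summarization."""
    meta_document = " ".join(documents)
    sentences = re.split(r"(?<=[.!?])\s+", meta_document)
    freq = Counter(re.findall(r"[a-z']+", meta_document.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return " ".join(s for s in sentences if s in top)  # keep original order

docs = [
    "Article one reports the event. It adds background details.",
    "Article two covers the same event. It repeats the key facts.",
]
print(extractive_meta_summary(docs, k=2))
```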
1 code implementation • ACL 2022 • Somnath Basu Roy Chowdhury, Chao Zhao, Snigdha Chaturvedi
A semantic unit is supposed to capture an abstract semantic concept.
1 code implementation • 31 Jan 2022 • Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
Text representations learned by machine learning models often encode undesirable demographic information of the user.
1 code implementation • EMNLP (insights) 2021 • Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
For this, we incorporate commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input.
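A minimal sketch of that kind of component, assuming node texts from a toy commonsense graph, pre-trained language model embeddings as node features, and a single graph-convolution layer; the graph, model choice, and dimensions are illustrative, not the paper's architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Toy commonsense graph: concept phrases as nodes, illustrative edges.
nodes = ["forget wallet", "return home", "miss the bus"]
edges = [(0, 1), (0, 2)]

# Pre-trained LM embeddings as node features (mean-pooled token states).
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(nodes, padding=True, return_tensors="pt")
    X = lm(**enc).last_hidden_state.mean(dim=1)        # [num_nodes, hidden]

# One graph-convolution layer: H = ReLU(D^-1 (A + I) X W).
A = torch.eye(len(nodes))                              # identity = self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv = torch.diag(1.0 / A.sum(dim=1))
W = torch.nn.Linear(X.size(1), 128)
H = torch.relu(W(D_inv @ A @ X))                       # graph-aware node states
print(H.shape)                                         # torch.Size([3, 128])
```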
1 code implementation • EMNLP 2021 • Somnath Basu Roy Chowdhury, Sayan Ghosh, Yiyuan Li, Junier B. Oliva, Shashank Srivastava, Snigdha Chaturvedi
Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task.
1 code implementation • Findings (EMNLP) 2021 • Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi
Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.
1 code implementation • ACL 2021 • Sayan Ghosh, Zheng Qi, Snigdha Chaturvedi, Shashank Srivastava
Many approaches to this problem use Reinforcement Learning (RL), which maximizes a single manually defined reward, such as BLEU.
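A minimal sketch of that setup: REINFORCE with sentence-level BLEU as the single hand-defined reward. The dummy log-probabilities stand in for those of a real generator, and the helper name is illustrative.

```python
import torch
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def reinforce_bleu_loss(token_log_probs, sampled_tokens, reference_tokens):
    """Policy-gradient surrogate loss with BLEU as the scalar reward:
    minimizing it maximizes the expected BLEU of sampled outputs."""
    reward = sentence_bleu(
        [reference_tokens], sampled_tokens,
        smoothing_function=SmoothingFunction().method1,
    )
    return -reward * token_log_probs.sum()

# Dummy log-probs the generator assigned to its sampled tokens.
log_probs = torch.log(torch.tensor([0.4, 0.6, 0.5], requires_grad=True))
loss = reinforce_bleu_loss(log_probs, ["the", "cat", "sat"], ["the", "cat", "sat", "down"])
loss.backward()   # gradients flow back into whatever produced the log-probs
print(loss.item())
```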
1 code implementation • EMNLP 2021 • Somnath Basu Roy Chowdhury, Faeze Brahman, Snigdha Chaturvedi
We perform evaluations in a zero-shot setting, showcasing that our model is able to generalize well across other datasets.
no code implementations • AACL 2020 • Faeze Brahman, Alexandru Petrusca, Snigdha Chaturvedi
Previous approaches in this domain have focused largely on one-shot generation, where a language model outputs a complete story based on limited initial input from a user.
1 code implementation • EMNLP 2020 • Faeze Brahman, Snigdha Chaturvedi
Emotions and their evolution play a central role in creating a captivating story.
no code implementations • ACL 2020 • Chao Zhao, Marilyn Walker, Snigdha Chaturvedi
Generating sequential natural language descriptions from graph-structured data (e.g., knowledge graph) is challenging, partly because of the structural differences between the input graph and the output text.
no code implementations • ACL 2020 • Alex Rinaldi, Jean Fox Tree, Snigdha Chaturvedi
Accurately diagnosing depression is difficult, requiring time-intensive interviews, assessments, and analysis.
1 code implementation • 22 Nov 2019 • Chao Zhao, Snigdha Chaturvedi
Opinion summarization from online product reviews is a challenging task, which involves identifying opinions related to various aspects of the product being reviewed.
no code implementations • CoNLL 2019 • Stephen Mayhew, Snigdha Chaturvedi, Chen-Tse Tsai, Dan Roth
Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated.
no code implementations • NAACL 2018 • Snigdha Chaturvedi, Shashank Srivastava, Dan Roth
People can identify correspondences between narratives in everyday life.
no code implementations • NAACL 2018 • Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, Dan Roth
We present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences.
no code implementations • EMNLP 2017 • Snigdha Chaturvedi, Haoruo Peng, Dan Roth
Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense.
Ranked #12 on Question Answering on Story Cloze
no code implementations • CoNLL 2017 • Haoruo Peng, Snigdha Chaturvedi, Dan Roth
Understanding stories (sequences of events) is a crucial yet challenging natural language understanding task.
no code implementations • 1 Dec 2015 • Shashank Srivastava, Snigdha Chaturvedi, Tom Mitchell
In this work, we address the problem of inferring the polarity of relationships between people in narrative summaries.
no code implementations • 30 Nov 2015 • Snigdha Chaturvedi, Dan Goldwasser, Hal Daume III
The ability to comprehend wishes or desires and their fulfillment is important to Natural Language Understanding.
no code implementations • 30 Nov 2015 • Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, Chris Dyer
Studying characters plays a vital role in computationally representing and interpreting narratives.