1 code implementation • 21 Apr 2023 • Archiki Prasad, Swarnadeep Saha, Xiang Zhou, Mohit Bansal
Multi-step reasoning ability is fundamental to many natural language tasks, yet it is unclear what constitutes a good reasoning chain and how to evaluate one.
no code implementations • 16 Dec 2022 • Swarnadeep Saha, Xinyan Velocity Yu, Mohit Bansal, Ramakanth Pasunuru, Asli Celikyilmaz
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
1 code implementation • 14 Nov 2022 • Swarnadeep Saha, Peter Hase, Nazneen Rajani, Mohit Bansal
We observe that (1) GPT-3 explanations are as grammatical as human explanations regardless of the hardness of the test samples, (2) for easy examples, GPT-3 generates highly supportive explanations, but human explanations are more generalizable, and (3) for hard examples, human explanations are significantly better than GPT-3 explanations in terms of both label-supportiveness and generalizability judgements.
1 code implementation • 21 Sep 2022 • Swarnadeep Saha, Shiyue Zhang, Peter Hase, Mohit Bansal
We demonstrate that SP-Search effectively represents the generative process behind human summaries using modules that are typically faithful to their intended behavior.
1 code implementation • ACL 2022 • Swarnadeep Saha, Prateek Yadav, Mohit Bansal
In this work, we study pre-trained language models that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints and semantics of such graphs.
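A minimal sketch of the kind of structural-constraint check such explanation graphs are analyzed against, assuming the graph must be a single connected DAG with edges from a fixed relation vocabulary (the relation set and helper name here are illustrative, not the paper's code):

```python
# Illustrative structural-constraint check for a generated explanation graph.
# Assumes the graph should be a non-empty, connected DAG with edges drawn
# from a fixed relation vocabulary; these constraints are a sketch only.
import networkx as nx

ALLOWED_RELATIONS = {"causes", "enables", "is-a"}  # hypothetical relation set

def is_structurally_valid(edges):
    """edges: list of (source_concept, relation, target_concept) triples."""
    g = nx.DiGraph()
    for src, rel, tgt in edges:
        if rel not in ALLOWED_RELATIONS:
            return False
        g.add_edge(src, tgt, relation=rel)
    # The explanation should form a single connected, acyclic structure.
    return (g.number_of_nodes() > 0
            and nx.is_directed_acyclic_graph(g)
            and nx.is_weakly_connected(g))

print(is_structurally_valid([("rain", "causes", "wet ground"),
                             ("wet ground", "causes", "slippery road")]))  # True
```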
1 code implementation • NAACL 2021 • Swarnadeep Saha, Prateek Yadav, Mohit Bansal
In order to jointly learn from all proof graphs and exploit the correlations between multiple proofs for a question, we pose this task as a set generation problem over structured output spaces where each proof is represented as a directed graph.
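To make the representation concrete, here is a hedged sketch of the underlying data structures, where each proof is a small directed graph over rule/fact identifiers and a question is paired with an unordered set of such graphs (field names are assumptions for illustration, not the paper's implementation):

```python
# Illustrative structures for the multi-proof setting: each proof is a
# directed graph, and a question maps to a *set* of proof graphs.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ProofGraph:
    nodes: frozenset  # rule/fact identifiers, e.g. {"fact2", "rule1"}
    edges: frozenset  # directed edges, e.g. {("fact2", "rule1")}

@dataclass
class QuestionProofs:
    question: str
    proofs: set = field(default_factory=set)  # unordered set of ProofGraphs

qp = QuestionProofs("Is Bob green?")
qp.proofs.add(ProofGraph(frozenset({"fact2", "rule1"}),
                         frozenset({("fact2", "rule1")})))
print(len(qp.proofs))  # 1
```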
1 code implementation • EMNLP 2021 • Swarnadeep Saha, Prateek Yadav, Lisa Bauer, Mohit Bansal
Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context.
1 code implementation • EMNLP 2020 • Swarnadeep Saha, Yixin Nie, Mohit Bansal
Reasoning about conjuncts in conjunctive sentences is important for a deeper understanding of conjunctions in English, and of how their usage and semantics differ from conjunctive and disjunctive boolean logic.
2 code implementations • EMNLP 2020 • Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Mohit Bansal
First, PROVER generates proofs with an accuracy of 87%, while retaining or improving performance on the QA task, compared to RuleTakers (up to 6% improvement on zero-shot evaluation).
no code implementations • IJCNLP 2019 • Chul Sung, Tejas Dhamecha, Swarnadeep Saha, Tengfei Ma, Vinay Reddy, Rishi Arora
Pre-trained BERT contextualized representations, fine-tuned with task-specific data, have achieved state-of-the-art results on multiple downstream NLP tasks.
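For context, a minimal fine-tuning sketch for sequence classification using the Hugging Face transformers library (model choice, data, and hyperparameters are illustrative, not the paper's setup):

```python
# Minimal BERT fine-tuning sketch for sequence classification
# (illustrative data and hyperparameters; not the paper's exact setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["The answer covers the key concept.", "The answer is off-topic."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # forward pass returns the loss
outputs.loss.backward()                  # one task-specific fine-tuning step
optimizer.step()
optimizer.zero_grad()
```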
no code implementations • WS 2019 • Sneha Mondal, Tejas Dhamecha, Shantanu Godbole, Smriti Pathak, Red Mendoza, K Gayathri Wijayarathna, Nabil Zary, Swarnadeep Saha, Malolan Chetlur
A typical medical curriculum is organized in a hierarchy of instructional objectives called Learning Outcomes (LOs); a few thousand LOs span five years of study.
no code implementations • 25 Feb 2019 • Swarnadeep Saha, Tejas I. Dhamecha, Smit Marvaniya, Peter Foltz, Renuka Sindhgatta, Bikram Sengupta
On a large-scale industry dataset and a benchmarking dataset, we show that our model performs significantly better than existing techniques that either learn domain-specific models or adapt a generic similarity scoring model from a large corpus.
no code implementations • COLING 2018 • Swarnadeep Saha, Mausam
We demonstrate the value of our coordination analyzer in the end task of Open Information Extraction (Open IE).
Ranked #18 on Open Information Extraction on CaRB
no code implementations • ACL 2017 • Swarnadeep Saha, Harinder Pal, Mausam
We design and release BONIE, the first open numerical relation extractor, for extracting Open IE tuples where one of the arguments is a number or a quantity-unit phrase.
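To make the output format concrete, a hedged sketch of what a numerical Open IE tuple might look like (field names are hypothetical and not BONIE's actual schema):

```python
# Illustrative representation of a numerical Open IE tuple, where one
# argument is a quantity-unit phrase. Field names are hypothetical.
from typing import NamedTuple, Optional

class NumericalTuple(NamedTuple):
    arg1: str
    relation: str
    quantity: float
    unit: Optional[str]

# "The Eiffel Tower is 324 metres tall." might yield:
t = NumericalTuple(arg1="The Eiffel Tower",
                   relation="has height",
                   quantity=324.0,
                   unit="metres")
print(t)
```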