Search Results for author: Pradeep Dasigi

Found 27 papers, 14 papers with code

How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources

1 code implementation · 7 Jun 2023 · Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi

Our evaluations show that the best model in any given evaluation reaches on average 83% of ChatGPT performance, and 68% of GPT-4 performance, suggesting that further investment in building better base models and instruction-tuning data is required to close the gap.

Instruction Following

Inference-time Re-ranker Relevance Feedback for Neural Information Retrieval

no code implementations · 19 May 2023 · Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi

Neural information retrieval often adopts a retrieve-and-rerank framework: a bi-encoder network first retrieves K (e.g., 100) candidates that are then re-ranked using a more powerful cross-encoder model to rank the better candidates higher.

Information Retrieval · Retrieval
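The two-stage pipeline described in the abstract can be sketched as follows. This is a toy illustration only: the scoring functions are cheap stand-ins (dot product and token overlap), not the neural models from the paper, and the corpus format is hypothetical.

```python
def bi_encoder_score(query_vec, doc_vec):
    # Cheap first-stage score: dot product of precomputed embeddings.
    return sum(q * d for q, d in zip(query_vec, doc_vec))

def cross_encoder_score(query, doc):
    # Stand-in for an expensive joint query-document model: token overlap.
    q_tokens, d_tokens = set(query.split()), set(doc.split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def retrieve_and_rerank(query, query_vec, corpus, k=100):
    # Stage 1: bi-encoder retrieves top-K candidates from the full corpus.
    candidates = sorted(
        corpus,
        key=lambda d: bi_encoder_score(query_vec, d["vec"]),
        reverse=True,
    )[:k]
    # Stage 2: cross-encoder re-ranks only those K candidates.
    return sorted(
        candidates,
        key=lambda d: cross_encoder_score(query, d["text"]),
        reverse=True,
    )
```

The design point is that the expensive second-stage model only ever sees K documents, so the pipeline stays tractable over large corpora.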

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

1 code implementation · 30 Jan 2023 · Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo

Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores?

AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization

1 code implementation · 2 Dec 2022 · Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi

We propose AGRO -- Adversarial Group discovery for Distributionally Robust Optimization -- an end-to-end approach that jointly identifies error-prone groups and improves accuracy on them.
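AGRO's full pipeline also learns the group assignments adversarially; the sketch below shows only the distributionally robust half, assuming group memberships are already known. It is a minimal exponentiated-gradient update (a standard group-DRO step, not AGRO's exact procedure): groups with higher loss receive more weight on the next optimization step.

```python
import math

def group_dro_weights(group_losses, weights, step_size=0.1):
    # Exponentiated-gradient update: error-prone (high-loss) groups are
    # upweighted, focusing subsequent optimization on them.
    new = [w * math.exp(step_size * l) for w, l in zip(weights, group_losses)]
    total = sum(new)
    return [w / total for w in new]

def robust_loss(group_losses, weights):
    # Weighted objective the model would minimize at the next step.
    return sum(w * l for w, l in zip(weights, group_losses))
```

After one update, the weighted loss is pulled toward the worst group's loss, which is what makes the optimization "robust" rather than average-case.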

Data-Efficient Finetuning Using Cross-Task Nearest Neighbors

1 code implementation · 1 Dec 2022 · Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi

Obtaining labeled data to train a model for a task of interest is often expensive.

Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

1 code implementation · ACL 2022 · Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi

We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data.

Natural Language Inference

Learning with Instance Bundles for Reading Comprehension

no code implementations · EMNLP 2021 · Dheeru Dua, Pradeep Dasigi, Sameer Singh, Matt Gardner

When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other.

Reading Comprehension

Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization

1 code implementation · EMNLP 2021 · Ansong Ni, Matt Gardner, Pradeep Dasigi

We also show that retrieval marginalization results in 4.1 QA F1 improvement over a non-marginalized baseline on HotpotQA in the fullwiki setting.

Question Answering · Retrieval
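Retrieval marginalization can be written as p(a | q) = Σ_c p(c | q) · p(a | c, q): instead of conditioning on a single retrieved context, the answer probability is summed over the retrieved set, so a false-negative context (one the retriever scored but that lacks the answer) contributes little rather than dominating. A minimal sketch, with probabilities passed in directly rather than produced by the paper's models:

```python
def marginalized_answer_prob(context_probs, answer_probs):
    # context_probs[i] = p(c_i | q): retriever's probability of context i.
    # answer_probs[i]  = p(a | c_i, q): reader's answer probability given c_i.
    # Marginalize the latent context variable:
    #   p(a | q) = sum_i p(c_i | q) * p(a | c_i, q)
    return sum(p_c * p_a for p_c, p_a in zip(context_probs, answer_probs))
```

Contexts that cannot support the answer simply contribute p(a | c, q) ≈ 0 to the sum, which is what mitigates the false-negative problem during training.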

IIRC: A Dataset of Incomplete Information Reading Comprehension Questions

no code implementations · EMNLP 2020 · James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, Pradeep Dasigi

However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all the information required to answer them, thus not evaluating a system's performance at identifying a potential lack of sufficient information and locating sources for that information.

Reading Comprehension

Evaluating NLP Models via Contrast Sets

no code implementations · 1 Oct 2020 · Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension · Sentiment Analysis

Iterative Search for Weakly Supervised Semantic Parsing

no code implementations · NAACL 2019 · Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard Hovy

Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer.

Semantic Parsing · Visual Reasoning
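The spuriousness problem the abstract describes can be seen in a tiny hypothetical example. Under weak supervision, any candidate logical form that evaluates to the gold answer survives the search, including forms that are right only by coincidence. The candidates below are toy arithmetic expressions standing in for logical forms:

```python
def find_consistent_programs(candidates, answer):
    # Weak supervision keeps every logical form that evaluates to the
    # gold answer -- including spurious ones that are right by accident.
    # (eval on arbitrary strings is unsafe; fine for this toy example.)
    return [expr for expr in candidates if eval(expr) == answer]

# For a question whose answer is 4, both a correct form ("2 + 2") and
# a spurious one ("2 * 2") coincidentally evaluate to the answer.
```

An unguided search can latch onto the spurious form, which is why the paper's iterative search alternates between constrained search and model-guided re-ranking.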

Neural Semantic Parsing

no code implementations · ACL 2018 · Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, Luke Zettlemoyer

Semantic parsing, the study of translating natural language utterances into machine-executable programs, is a well-established research area and has applications in question answering, instruction following, voice assistants, and code generation.

Code Generation · Instruction Following · +4

Ontology-Aware Token Embeddings for Prepositional Phrase Attachment

1 code implementation · ACL 2017 · Pradeep Dasigi, Waleed Ammar, Chris Dyer, Eduard Hovy

Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language.

Prepositional Phrase Attachment · Word Embeddings
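The limitation the abstract names is easy to see in miniature. A type-level embedding table maps each word to exactly one vector, so every occurrence of an ambiguous word shares the same parameters. The table and vector below are toy placeholders, not the paper's ontology-aware embeddings:

```python
# Toy type-level embedding table: one fixed vector per word type.
embeddings = {"bank": [0.2, -0.1, 0.7]}  # hypothetical 3-d vector

def embed(token):
    # Lookup ignores context entirely -- the core limitation addressed
    # by ontology-aware (sense-conditioned) token embeddings.
    return embeddings[token]

# Identical vector whether the sentence is about rivers or finance:
river_sense = embed("bank")    # "sat on the bank of the river"
money_sense = embed("bank")    # "deposited cash at the bank"
assert river_sense == money_sense
```

Ontology-aware embeddings instead condition the representation on candidate word senses, letting context disambiguate which parameters are used.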
