Search Results for author: Abhilasha Ravichander

Found 25 papers, 15 papers with code

Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?

1 code implementation · 19 Feb 2024 · Nishant Balepur, Abhilasha Ravichander, Rachel Rudinger

We hope to motivate the use of stronger baselines in MCQA benchmarks, the design of robust MCQA datasets, and further efforts to explain LLM decision-making.

Decision Making · Memorization +2

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

no code implementations · 4 Dec 2023 · Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi

We analyze the effect of alignment tuning by examining the token distribution shift between base LLMs and their aligned counterparts.

In-Context Learning
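
As a rough illustration of the token-distribution-shift analysis described in this entry, here is a minimal sketch that compares a base checkpoint with its aligned counterpart position by position; the checkpoint names and the KL-divergence summary are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# Minimal sketch: per-position distribution shift between a base LLM and an
# aligned counterpart. Checkpoint names and the KL summary are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-2-7b-hf"          # assumed base checkpoint
aligned_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed aligned counterpart
tok = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
aligned = AutoModelForCausalLM.from_pretrained(aligned_name)

text = "What is the capital of France? The capital of France is Paris."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    log_p_base = torch.log_softmax(base(ids).logits, dim=-1)
    log_p_aligned = torch.log_softmax(aligned(ids).logits, dim=-1)

# Logits at position i are the model's distribution over the *next* token.
# KL(aligned || base) per position highlights where alignment tuning most
# changes the next-token distribution.
kl = torch.sum(log_p_aligned.exp() * (log_p_aligned - log_p_base), dim=-1)[0]
for token, k in zip(tok.convert_ids_to_tokens(ids[0]), kl.tolist()):
    print(f"{token:>15s}  KL={k:.3f}")
```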

Agent Lumos: Unified and Modular Training for Open-Source Language Agents

1 code implementation · 9 Nov 2023 · Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, Bill Yuchen Lin

To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks.

Math Question Answering

What's In My Big Data?

1 code implementation · 31 Oct 2023 · Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, Hanna Hajishirzi, Noah A. Smith, Jesse Dodge

We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.

Benchmarking
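
For a sense of what corpus-level analyses of this kind involve, below is a small plain-Python stand-in that computes document counts, exact-duplicate counts, and frequent n-grams over a JSONL corpus; the file path, record layout, and function name are illustrative and do not reflect WIMBD's actual interface.

```python
# Sketch: simple corpus-level statistics of the kind such tools report
# (document counts, duplicate documents, most frequent n-grams).
# The JSONL layout ({"text": ...}) and file path are assumptions.
import json
from collections import Counter
from hashlib import sha256

def corpus_stats(path, n=3, top_k=10):
    doc_hashes, ngrams = Counter(), Counter()
    n_docs = n_tokens = 0
    with open(path) as f:
        for line in f:
            text = json.loads(line)["text"]
            tokens = text.lower().split()
            n_docs += 1
            n_tokens += len(tokens)
            doc_hashes[sha256(text.encode()).hexdigest()] += 1
            ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    n_dupes = sum(c - 1 for c in doc_hashes.values() if c > 1)
    return {"documents": n_docs, "tokens": n_tokens,
            "duplicate_documents": n_dupes,
            f"top_{n}-grams": ngrams.most_common(top_k)}

# print(corpus_stats("corpus.jsonl"))  # path is illustrative
```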

The Generative AI Paradox: "What It Can Create, It May Not Understand"

no code implementations · 31 Oct 2023 · Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

Specifically, we propose and test the Generative AI Paradox hypothesis: generative models, having been trained directly to reproduce expert-like outputs, acquire generative capabilities that are not contingent upon -- and can therefore exceed -- their ability to understand those same types of outputs.

CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

1 code implementation · 1 Nov 2022 · Abhilasha Ravichander, Matt Gardner, Ana Marasović

We also have workers make three kinds of edits to the passage -- paraphrasing the negated statement, changing the scope of the negation, and reversing the negation -- resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts.

Natural Language Understanding · Negation +1

Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions

no code implementations · 28 Jul 2022 · Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg

Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.

NoiseQA: Challenge Set Evaluation for User-Centric Question Answering

2 code implementations · EACL 2021 · Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black

When Question-Answering (QA) systems are deployed in the real world, users query them through a variety of interfaces, such as speaking to voice assistants, typing questions into a search engine, or even translating questions to languages supported by the QA system.

Question Answering

Measuring and Improving Consistency in Pretrained Language Models

1 code implementation · 1 Feb 2021 · Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg

In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
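
A minimal sketch of one way to test this kind of consistency is to query a masked language model with paraphrases of the same factual cloze prompt and check whether its top answer agrees; the prompts below are illustrative and are not items from the paper's benchmark.

```python
# Sketch: does a masked LM answer paraphrases of the same factual query
# consistently? The prompts are made-up examples for illustration.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

paraphrases = [
    "Albert Einstein was born in [MASK].",
    "Albert Einstein is originally from [MASK].",
    "The birthplace of Albert Einstein is [MASK].",
]

answers = [fill(p, top_k=1)[0]["token_str"].strip() for p in paraphrases]
print(answers)
print("consistent across paraphrases:", len(set(answers)) == 1)
```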

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

1 code implementation · Joint Conference on Lexical and Computational Semantics 2020 · Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?

no code implementations · EACL 2021 · Abhilasha Ravichander, Yonatan Belinkov, Eduard Hovy

Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks.

Natural Language Inference · Sentence +1

Stress Test Evaluation for Natural Language Inference

1 code implementation · COLING 2018 · Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, Graham Neubig

Natural language inference (NLI) is the task of determining if a natural language hypothesis can be inferred from a given premise in a justifiable manner.

Natural Language Inference · Natural Language Understanding +1
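
For readers unfamiliar with the task, here is a minimal premise/hypothesis classification example using an off-the-shelf NLI checkpoint; the model name is a common public choice and is unrelated to the paper's own systems or stress tests.

```python
# Sketch: classify a premise/hypothesis pair with a pretrained NLI model.
# The checkpoint is an assumed public MNLI model; labels come from its config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])  # expected: ENTAILMENT
```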

Preserving Intermediate Objectives: One Simple Trick to Improve Learning for Hierarchical Models

no code implementations · 23 Jun 2017 · Abhilasha Ravichander, Shruti Rijhwani, Rajat Kulshreshtha, Chirag Nagpal, Tadas Baltrušaitis, Louis-Philippe Morency

In this work, we focus on improving learning for such hierarchical models and demonstrate our method on the task of speaker trait prediction.

Persuasiveness
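
The title suggests supervising a hierarchical model at its intermediate level in addition to its final output. Below is a small sketch of that general idea (an auxiliary loss on the lower level plus the main loss on the top level); the architecture, dimensions, and loss weight are chosen purely for illustration and are not the paper's exact setup.

```python
# Sketch: a two-level hierarchical model (word encoder -> sentence encoder)
# trained with both a final objective and an intermediate objective.
import torch
import torch.nn as nn

class HierarchicalModel(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.word_rnn = nn.GRU(dim, dim, batch_first=True)   # lower level
        self.sent_rnn = nn.GRU(dim, dim, batch_first=True)   # upper level
        self.inter_head = nn.Linear(dim, n_classes)          # intermediate objective
        self.final_head = nn.Linear(dim, n_classes)          # main objective

    def forward(self, docs):                   # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        words = self.embed(docs.view(b * s, w))
        _, sent_vecs = self.word_rnn(words)    # (1, b*s, dim)
        sent_vecs = sent_vecs.reshape(b, s, -1)
        inter_logits = self.inter_head(sent_vecs)            # per-sentence prediction
        _, doc_vec = self.sent_rnn(sent_vecs)
        final_logits = self.final_head(doc_vec.squeeze(0))   # per-document prediction
        return final_logits, inter_logits

model = HierarchicalModel()
ce = nn.CrossEntropyLoss()
docs = torch.randint(0, 1000, (4, 3, 10))      # toy batch
doc_labels = torch.randint(0, 2, (4,))
sent_labels = torch.randint(0, 2, (4, 3))      # intermediate supervision

final_logits, inter_logits = model(docs)
# Preserve the intermediate objective by adding it (with an illustrative
# weight of 0.5) to the main loss instead of supervising only the top level.
loss = ce(final_logits, doc_labels) \
     + 0.5 * ce(inter_logits.reshape(-1, inter_logits.size(-1)), sent_labels.reshape(-1))
loss.backward()
```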
