Search Results for author: Abhilasha Ravichander

Found 13 papers, 7 papers with code

NoiseQA: Challenge Set Evaluation for User-Centric Question Answering

2 code implementations • EACL 2021 • Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black

When Question-Answering (QA) systems are deployed in the real world, users query them through a variety of interfaces, such as speaking to voice assistants, typing questions into a search engine, or even translating questions to languages supported by the QA system.

Question Answering

Measuring and Improving Consistency in Pretrained Language Models

1 code implementation • 1 Feb 2021 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg

In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?

Pretrained Language Models
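
The core question here is whether a PLM gives the same answer when the same factual query is phrased in different ways. Below is a minimal sketch of such a consistency check, assuming an off-the-shelf BERT model queried through the Hugging Face `fill-mask` pipeline; the model name and the two paraphrased prompts are illustrative assumptions, not the paper's benchmark or released code.

```python
# Minimal sketch: does a masked LM answer the same factual query consistently
# across two paraphrased cloze prompts? Model and prompts are illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

paraphrases = [
    "Albert Einstein was born in [MASK].",
    "The birthplace of Albert Einstein is [MASK].",
]

predictions = [fill(p)[0]["token_str"] for p in paraphrases]
print(predictions)
print("consistent" if len(set(predictions)) == 1 else "inconsistent")
```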

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

1 code implementation • Joint Conference on Lexical and Computational Semantics 2020 • Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.
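
The snippet describes the consistency probe only at a high level. As a hedged illustration of what a cloze-style hypernymy query to a masked LM looks like, the sketch below asks the same taxonomic question with two different "is-a" phrasings and compares the top predictions; the model choice and prompt wordings are assumptions, not the probe used in the paper.

```python
# Sketch of a cloze-style hypernymy query with a simple consistency check:
# the same taxonomic fact asked via two "is-a" patterns. Illustrative only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "a robin is a type of [MASK].",
    "a robin is a [MASK].",
]

for prompt in prompts:
    top = fill(prompt)[0]
    print(f"{prompt!r} -> {top['token_str']} (p={top['score']:.2f})")
```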

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?

no code implementations • EACL 2021 • Abhilasha Ravichander, Yonatan Belinkov, Eduard Hovy

Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks.

Natural Language Inference • Word Embeddings

Stress Test Evaluation for Natural Language Inference

1 code implementation • COLING 2018 • Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, Graham Neubig

Natural language inference (NLI) is the task of determining if a natural language hypothesis can be inferred from a given premise in a justifiable manner.

Natural Language Inference • Natural Language Understanding
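
As a concrete illustration of the NLI task, and of the kind of perturbation a stress test might apply, here is a hedged sketch using an off-the-shelf MNLI model from Hugging Face. The model, the example premise/hypothesis pair, and the appended distractor phrase are assumptions for illustration, not the systems or exact constructions evaluated in the paper.

```python
# Sketch: label a premise/hypothesis pair with an off-the-shelf MNLI model,
# then append an illustrative tautological distractor and see if the label flips.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def nli_label(premise: str, hypothesis: str) -> str:
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(dim=-1).item()]

premise = "A man is playing a guitar on stage."
hypothesis = "A musician is performing."

print(nli_label(premise, hypothesis))                        # expected: ENTAILMENT
print(nli_label(premise, hypothesis + " and true is true"))  # does the label change?
```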

Preserving Intermediate Objectives: One Simple Trick to Improve Learning for Hierarchical Models

no code implementations • 23 Jun 2017 • Abhilasha Ravichander, Shruti Rijhwani, Rajat Kulshreshtha, Chirag Nagpal, Tadas Baltrušaitis, Louis-Philippe Morency

In this work, we focus on improving learning for such hierarchical models and demonstrate our method on the task of speaker trait prediction.
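
The title suggests the "simple trick" is to retain a supervised objective on the intermediate level of a hierarchical model alongside the final objective. Below is a heavily hedged PyTorch sketch of that general idea; the two-level architecture, dimensions, labels, and loss weighting are all assumptions for illustration, not the authors' model.

```python
# Hedged sketch: a two-level hierarchical model trained with both an intermediate
# (per-segment) objective and a final (per-sequence) objective. All sizes and the
# auxiliary weight are illustrative assumptions.
import torch
import torch.nn as nn

class TwoLevelModel(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, num_classes=3):
        super().__init__()
        self.segment_encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.segment_head = nn.Linear(hidden_dim, num_classes)   # intermediate objective
        self.sequence_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.final_head = nn.Linear(hidden_dim, num_classes)     # final objective

    def forward(self, x):  # x: (batch, segments, frames, input_dim)
        b, s, f, d = x.shape
        _, seg_h = self.segment_encoder(x.reshape(b * s, f, d))
        seg_repr = seg_h[-1].reshape(b, s, -1)        # one vector per segment
        seg_logits = self.segment_head(seg_repr)      # per-segment predictions
        _, seq_h = self.sequence_encoder(seg_repr)
        final_logits = self.final_head(seq_h[-1])     # per-sequence prediction
        return seg_logits, final_logits

model = TwoLevelModel()
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 5, 10, 32)              # 4 sequences, 5 segments, 10 frames each
seg_labels = torch.randint(0, 3, (4, 5))   # labels for the intermediate level
final_labels = torch.randint(0, 3, (4,))   # labels for the final level

seg_logits, final_logits = model(x)
aux_weight = 0.5                           # assumed trade-off weight
loss = criterion(final_logits, final_labels) + \
       aux_weight * criterion(seg_logits.reshape(-1, 3), seg_labels.reshape(-1))
loss.backward()
```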
