Search Results for author: Nitish Joshi

Found 10 papers, 8 papers with code

Personas as a Way to Model Truthfulness in Language Models

no code implementations • 27 Oct 2023 • Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, He He

This allows the model to separate truth from falsehoods and controls the truthfulness of its generation.

Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples

1 code implementation • NeurIPS 2023 • Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, He He

Given the intractably large size of the space of proofs, any model that is capable of general deductive reasoning must generalize to proofs of greater complexity.

Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations

1 code implementation • 22 May 2023 • Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He

We investigate the inductive biases of ICL from the perspective of feature bias: which feature ICL is more likely to use given a set of underspecified demonstrations in which two features are equally predictive of the labels.

In-Context Learning • Inductive Bias
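
The setup described above lends itself to a small illustration. Below is a toy sketch, with example texts and a prompt format of my own invention rather than the paper's, of a demonstration set in which two features (the sentiment word and the topic word) are equally predictive of the label, plus a test input that decouples them.

```python
# Toy construction (invented for illustration, not from the paper's code) of
# "underspecified demonstrations": two features, the sentiment word and the
# topic word, are both perfectly predictive of the label in the demos.
demos = [
    ("The food at this restaurant was great.",    "positive"),
    ("The service at this restaurant was great.", "positive"),
    ("The plot of this movie was awful.",          "negative"),
    ("The acting in this movie was awful.",        "negative"),
]

# The test input decouples the two features (positive sentiment, movie topic),
# so the predicted label reveals which feature the model's inductive bias favors.
test_input = "The acting in this movie was great."

prompt = "\n".join(f"Review: {t}\nLabel: {l}" for t, l in demos)
prompt += f"\nReview: {test_input}\nLabel:"
print(prompt)
```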

Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens

1 code implementation • 25 Oct 2022 • Nitish Joshi, Xiang Pan, He He

In case (i), where the feature is neither necessary nor sufficient for prediction, we want the model to be invariant to it.

Negation
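
For case (i), one natural way to probe invariance is to perturb only the suspected spurious feature and check that the prediction is unchanged. The sketch below is a minimal illustration with a stand-in keyword classifier; the perturbation and predictor are hypothetical, not taken from the paper.

```python
# Minimal invariance probe (hypothetical setup, not the paper's code): perturb
# only the suspected spurious feature and check the prediction is unchanged.

def is_invariant(predict, text, perturb):
    return predict(text) == predict(perturb(text))

# Stand-ins: treat the topic word as the spurious feature and use a trivial
# keyword classifier in place of a real model.
perturb = lambda t: t.replace("restaurant", "movie")
predict = lambda t: "negative" if "awful" in t else "positive"

print(is_invariant(predict, "The food at this restaurant was awful.", perturb))  # True
```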

Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation

no code implementations • 4 Oct 2022 • Aahlad Puli, Nitish Joshi, He He, Rajesh Ranganath

In prediction tasks, there exist features that are related to the label in the same way across different settings for that task; these are semantic features or semantics.

Data Augmentation • Natural Language Inference
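
As a rough illustration of such an augmentation in NLI, one could corrupt the semantic (order-dependent) signal while preserving surface nuisances such as premise-hypothesis lexical overlap. The shuffling function below is an assumption on my part for illustration, not necessarily the augmentation used in the paper.

```python
import random

def shuffle_words(sentence, seed=0):
    # Destroy word order (a semantic signal) while keeping the bag of words,
    # so surface nuisances such as premise-hypothesis overlap are preserved.
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

premise = "A man is playing a guitar on stage."
hypothesis = "A man is playing an instrument."
print(shuffle_words(hypothesis))
```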

QuALITY: Question Answering with Long Input Texts, Yes!

2 code implementations • NAACL 2022 • Richard Yuanzhe Pang, Alicia Parrish, Nitish Joshi, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, Samuel R. Bowman

To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process.

Multiple-choice • Multiple Choice Question Answering (MCQA)
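
To make the task format concrete, here is a sketch of how a QuALITY-style multiple-choice item might be represented and turned into a prompt. The field names and the example question are assumptions for illustration, not the dataset's official schema.

```python
# Hypothetical item layout (field names assumed, not the official schema).
example = {
    "article": "<long passage, on average about 5,000 tokens>",
    "question": "Why does the narrator leave the station?",
    "options": [
        "To find water",
        "To repair the ship",
        "To send a distress call",
        "To follow the stranger",
    ],
    "gold_label": 2,  # index of the correct option
}

def format_prompt(ex):
    # Render the item as a single multiple-choice prompt.
    opts = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(ex["options"]))
    return f"{ex['article']}\n\nQuestion: {ex['question']}\n{opts}\nAnswer:"

print(format_prompt(example))
```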

An Investigation of the (In)effectiveness of Counterfactually Augmented Data

1 code implementation • ACL 2022 • Nitish Joshi, He He

While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data.

Natural Language Understanding
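
Counterfactually augmented data pairs each original example with a minimally edited version whose label flips, so that only the causally relevant span differs. The pair below is invented for illustration; it is not drawn from the datasets studied in the paper.

```python
# Invented CAD pair (illustration only): a minimal edit flips the label while
# the rest of the input, including any spurious cues, stays fixed.
cad_pair = {
    "original":       ("The plot was engaging from start to finish.", "positive"),
    "counterfactual": ("The plot was dull from start to finish.",     "negative"),
}

# Training on both members of the pair is meant to discourage reliance on the
# unchanged (potentially spurious) parts of the input.
for kind, (text, label) in cad_pair.items():
    print(f"{kind:>14}: [{label}] {text}")
```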

Coupled Training of Sequence-to-Sequence Models for Accented Speech Recognition

1 code implementation • 14 May 2020 • Vinit Unni, Nitish Joshi, Preethi Jyothi

We propose coupled training for encoder-decoder ASR models that acts on pairs of utterances corresponding to the same text spoken by speakers with different accents.

Accented Speech Recognition • Automatic Speech Recognition • +2
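
A rough sketch of the general idea, written in PyTorch with invented shapes and weights: combine the usual per-utterance ASR losses with a consistency term that couples the encoder outputs of the two accents. This is an assumption about what such coupling could look like, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def coupled_loss(enc_a, enc_b, asr_loss_a, asr_loss_b, weight=0.1):
    # enc_a, enc_b: encoder outputs for the same text in two accents, shape (T, D).
    # Pull the two representations together on top of the standard ASR losses.
    consistency = F.mse_loss(enc_a, enc_b)
    return asr_loss_a + asr_loss_b + weight * consistency

# Dummy tensors standing in for real encoder outputs and per-utterance losses.
enc_a, enc_b = torch.randn(50, 256), torch.randn(50, 256)
print(coupled_loss(enc_a, enc_b, torch.tensor(2.3), torch.tensor(2.7)).item())
```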

Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension

1 code implementation • ACL 2019 • Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Mohit Bansal

Multi-hop reading comprehension requires the model to explore and connect relevant information from multiple sentences/documents in order to answer the question about the context.

Multi-Hop Reading Comprehension • Sentence
