Search Results for author: Vaishnavi Shrivastava

Found 5 papers, 1 paper with code

Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation

no code implementations • 15 Nov 2023 • Vaishnavi Shrivastava, Percy Liang, Ananya Kumar

To maintain user trust, large language models (LLMs) should signal low confidence on examples where they are incorrect, instead of misleading the user.

Question Answering
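
The surrogate-model idea in the snippet above lends itself to a short sketch: score a closed model's answer by the probability a white-box surrogate assigns to it. This is a minimal, hypothetical illustration, assuming Hugging Face transformers and a Llama-style surrogate; the model name, prompt format, and scoring choice are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of surrogate confidence estimation (illustrative, not the
# paper's exact method): use a white-box surrogate LM's token probabilities
# as a confidence score for an answer produced by a closed model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

SURROGATE = "meta-llama/Llama-2-7b-hf"  # assumption: any white-box LM works
tok = AutoTokenizer.from_pretrained(SURROGATE)
lm = AutoModelForCausalLM.from_pretrained(SURROGATE)

def surrogate_confidence(question: str, answer: str) -> float:
    """Geometric-mean probability the surrogate assigns to `answer`."""
    prompt = f"Q: {question}\nA:"
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full_ids).logits
    # Position i of the logits predicts token i+1; slice out answer tokens.
    # (Prompt/answer token alignment is approximate; fine for a sketch.)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    token_lps = [log_probs[i, full_ids[0, i + 1]].item() for i in answer_positions]
    return float(torch.tensor(token_lps).mean().exp())
```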

Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs

1 code implementation • 8 Nov 2023 • Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot

Our experiments with ChatGPT-3.5 show that this bias is ubiquitous (80% of our personas demonstrate bias), significant (some datasets show performance drops of 70%+), and especially harmful for certain groups (some personas suffer statistically significant drops on 80%+ of the datasets).

Fairness, Math
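
The measurement behind the numbers in the snippet above reduces to comparing accuracy on the same questions with and without a persona system prompt. A hedged sketch, assuming the OpenAI chat API; `eval_set`, the example persona, and the substring grading are placeholders, not the paper's datasets or judging protocol.

```python
# Hedged sketch of the persona-bias probe: run an eval set with and without
# a persona system prompt and compare accuracy. Grading here is a crude
# substring match; the paper's setup is more careful.
from openai import OpenAI

client = OpenAI()

def accuracy(persona: str | None, eval_set: list[tuple[str, str]]) -> float:
    correct = 0
    for question, gold in eval_set:
        messages = []
        if persona:
            messages.append({"role": "system",
                             "content": f"You are {persona}. Answer the question."})
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=messages, temperature=0
        ).choices[0].message.content
        correct += gold.lower() in reply.lower()
    return correct / len(eval_set)

# Bias signal: accuracy drop once the persona is attached, e.g.
# drop = accuracy(None, eval_set) - accuracy("a physically disabled person", eval_set)
```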

Benchmarking and Improving Generator-Validator Consistency of Language Models

no code implementations • 3 Oct 2023 • Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, Percy Liang

To improve the consistency of LMs, we propose to finetune on the filtered generator and validator responses that are GV-consistent, and call this approach consistency fine-tuning.

Benchmarking, Instruction Following, +1
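
The generator-validator (GV) consistency check behind the snippet above can be sketched model-agnostically: generate an answer, then ask the same model to validate it, and keep only the pairs where the two sides agree. `query_lm` below is an assumed stand-in for any LM call, and the prompt templates are illustrative.

```python
# Minimal GV-consistency sketch. `query_lm` is a hypothetical placeholder
# for whatever LM call you use; it is not a real API.
def query_lm(prompt: str) -> str:
    raise NotImplementedError("call your LM of choice here")

def gv_consistent(question: str) -> tuple[str, bool]:
    # Generator side: produce an answer.
    answer = query_lm(f"Q: {question}\nA:")
    # Validator side: ask the same model to verify that answer.
    verdict = query_lm(
        f"Q: {question}\nProposed answer: {answer}\nIs this answer correct? (yes/no)"
    )
    return answer, verdict.strip().lower().startswith("yes")

# Consistency fine-tuning, per the snippet: filter to GV-consistent pairs
# and fine-tune on them, e.g.
# train_pairs = [(q, a) for q in questions for a, ok in [gv_consistent(q)] if ok]
```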

Exploring Low-Cost Transformer Model Compression for Large-Scale Commercial Reply Suggestions

no code implementations • 27 Nov 2021 • Vaishnavi Shrivastava, Radhika Gaonkar, Shashank Gupta, Abhishek Jha

Fine-tuning pre-trained language models improves the quality of commercial reply suggestion systems, but at the cost of unsustainable training times.

Model Compression
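
The snippet above motivates cheaper alternatives to full fine-tuning. As one illustration of the general compression family (knowledge distillation is a standard technique here, not necessarily the paper's specific recipe), a small student can be trained to match a fine-tuned teacher's output distribution:

```python
# Hedged sketch of a standard knowledge-distillation loss (Hinton-style),
# shown as one common low-cost compression route; not claimed to be the
# paper's method.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t
```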
