Search Results for author: Archiki Prasad

Found 9 papers, 6 with code

Soft Self-Consistency Improves Language Model Agents

1 code implementation • 20 Feb 2024 • Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal

Current "sample and select" methods such as self-consistency (SC) rely on majority voting to score answers.

Tasks: Language Modelling
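
The "sample and select" recipe mentioned in the abstract above reduces to a few lines of code. Below is a minimal, hypothetical sketch of standard self-consistency majority voting, not this paper's released implementation; the `generate` callable and the sample count are assumptions for illustration.

    from collections import Counter

    def self_consistency(generate, prompt, n_samples=8):
        # Sample several candidate answers, then select the one
        # that the majority of samples agree on.
        answers = [generate(prompt) for _ in range(n_samples)]
        winner, votes = Counter(answers).most_common(1)[0]
        return winner, votes / n_samples  # answer and its vote share

The paper's point of departure is that these discrete votes are a coarse way to score answers; the sketch only shows the majority-vote baseline it improves on.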

ReGAL: Refactoring Programs to Discover Generalizable Abstractions

1 code implementation • 29 Jan 2024 • Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal

While large language models (LLMs) are increasingly being used for program synthesis, they lack the global view needed to develop useful abstractions; they generally predict programs one at a time, often repeating the same functionality.

Tasks: Date Understanding, Program Synthesis

ADaPT: As-Needed Decomposition and Planning with Language Models

no code implementations • 8 Nov 2023 • Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot

Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment.

Tasks: Decision Making

Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models

1 code implementation • 9 Oct 2023 • Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal

An increasing number of vision-language tasks can be handled with little to no training, i.e., in a zero- or few-shot manner, by marrying large language models (LLMs) to vision encoders, resulting in large vision-language models (LVLMs).

Tasks: Language Modelling, Question Answering (+2 more)

ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness

1 code implementation • 21 Apr 2023 • Archiki Prasad, Swarnadeep Saha, Xiang Zhou, Mohit Bansal

Multi-step reasoning ability is fundamental to many natural language tasks, yet it is unclear what constitutes a good reasoning chain and how to evaluate one.

Tasks: Informativeness, Natural Language Inference (+1 more)

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models

2 code implementations • 14 Mar 2022 • Archiki Prasad, Peter Hase, Xiang Zhou, Mohit Bansal

Providing natural language instructions in prompts is a useful new paradigm for improving task performance of large language models in a zero-shot setting.
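
The title's "gradient-free, edit-based instruction search" can be illustrated with a simple hill-climbing loop. The sketch below is a loose, hypothetical approximation, not the released GrIPS code: GrIPS edits instructions at the phrase level (with delete, swap, paraphrase, and add operations), whereas this sketch mutates at the word level with only delete and swap, and the `score` callable (e.g., accuracy on a small dev set) is an assumption.

    import random

    def edit_based_search(score, instruction, n_iters=10, n_candidates=4):
        # Gradient-free search: propose edited variants of the instruction
        # and keep whichever scores best on the downstream task.
        def mutate(text):
            words = text.split()
            if len(words) > 1 and random.random() < 0.5:
                words.pop(random.randrange(len(words)))   # delete a word
            elif len(words) > 1:
                i, j = random.sample(range(len(words)), 2)
                words[i], words[j] = words[j], words[i]   # swap two words
            return " ".join(words)

        best, best_score = instruction, score(instruction)
        for _ in range(n_iters):
            for cand in (mutate(best) for _ in range(n_candidates)):
                s = score(cand)
                if s > best_score:
                    best, best_score = cand, s
        return best

Because the search only ever queries the model for a score, it needs no gradients or access to model internals, which is what makes this style of search applicable to API-only LLMs.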

The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding

no code implementations • EMNLP (MRL) 2021 • Archiki Prasad, Mohammad Ali Rehan, Shreya Pathak, Preethi Jyothi

In this work, we propose the use of bilingual intermediate pretraining as a reliable technique to derive large and consistent performance gains on three different NLP tasks using code-switched text.

Tasks: Language Modelling, Natural Language Inference (+4 more)

How Accents Confound: Probing for Accent Information in End-to-End Speech Recognition Systems

no code implementations • ACL 2020 • Archiki Prasad, Preethi Jyothi

We use a state-of-the-art end-to-end ASR system, comprising convolutional and recurrent layers, trained on a large amount of US-accented English speech, and evaluate it on speech samples from seven different English accents.

Tasks: Automatic Speech Recognition (ASR) (+1 more)
