Search Results for author: Akshara Prabhakar

Found 12 papers, 11 papers with code

Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding

1 code implementation • 6 Nov 2024 • Haolin Chen, Yihao Feng, Zuxin Liu, Weiran Yao, Akshara Prabhakar, Shelby Heinecke, Ricky Ho, Phil Mui, Silvio Savarese, Caiming Xiong, Huan Wang

Large language models (LLMs) have shown impressive capabilities, but still struggle with complex reasoning tasks requiring multiple steps.

ARC • GSM8K

CRMArena: Understanding the Capacity of LLM Agents to Perform Professional CRM Tasks in Realistic Environments

1 code implementation • 4 Nov 2024 • Kung-Hsiang Huang, Akshara Prabhakar, Sidharth Dhawan, Yixin Mao, Huan Wang, Silvio Savarese, Caiming Xiong, Philippe Laban, Chien-Sheng Wu

Customer Relationship Management (CRM) systems are vital for modern enterprises, providing a foundation for managing customer interactions and data.

LoRA Soups: Merging LoRAs for Practical Skill Composition Tasks

1 code implementation • 16 Oct 2024 • Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, Samy Jelassi

We study how different LoRA modules can be merged to achieve skill composition: testing the performance of the merged model on a target task that involves combining multiple skills, each skill coming from a single LoRA.

Math • parameter-efficient fine-tuning
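The core idea of merging LoRA modules can be sketched in a few lines: each LoRA contributes a low-rank update B @ A to a frozen base weight, and a merged model combines these updates. The weighted-average rule and all names below are illustrative assumptions, not the paper's actual merging method.

```python
import numpy as np

def merge_lora_deltas(loras, weights=None):
    """Merge LoRA modules by mixing their weight updates.

    Each LoRA is a pair (A, B) of low-rank factors whose product B @ A
    is the update applied to a frozen base weight. The uniform weighted
    average here is an illustrative sketch, not the paper's method.
    """
    if weights is None:
        weights = [1.0 / len(loras)] * len(loras)
    # Sum the full-rank updates delta_i = B_i @ A_i with mixing weights.
    return sum(w * (B @ A) for w, (A, B) in zip(weights, loras))

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and LoRA rank, chosen arbitrarily for the demo
math_lora = (rng.normal(size=(r, d)), rng.normal(size=(d, r)))
code_lora = (rng.normal(size=(r, d)), rng.normal(size=(d, r)))
delta = merge_lora_deltas([math_lora, code_lora])
print(delta.shape)  # same shape as the base weight matrix
```

The merged delta would then be added to the base weight; because each update is rank-r, the merge costs only a couple of small matrix products per layer.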

xLAM: A Family of Large Action Models to Empower AI Agent Systems

1 code implementation • 5 Sep 2024 • JianGuo Zhang, Tian Lan, Ming Zhu, Zuxin Liu, Thai Hoang, Shirley Kokane, Weiran Yao, Juntao Tan, Akshara Prabhakar, Haolin Chen, Zhiwei Liu, Yihao Feng, Tulika Awalgaonkar, Rithesh Murthy, Eric Hu, Zeyuan Chen, ran Xu, Juan Carlos Niebles, Shelby Heinecke, Huan Wang, Silvio Savarese, Caiming Xiong

By releasing the xLAM series, we aim to advance the performance of open-source LLMs for autonomous AI agents, potentially accelerating progress and democratizing access to high-performance models for agent tasks.

AI Agent

Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning

1 code implementation • 1 Jul 2024 • Akshara Prabhakar, Thomas L. Griffiths, R. Thomas McCoy

By focusing on a single relatively simple task, we are able to identify three factors that systematically affect CoT performance: the probability of the task's expected output (probability), what the model has implicitly learned during pre-training (memorization), and the number of intermediate operations involved in reasoning (noisy reasoning).

Memorization

InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback

2 code implementations • NeurIPS 2023 • John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao

Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation.

Benchmarking • Code Generation +2

Commonsense and Named Entity Aware Knowledge Grounded Dialogue Generation

1 code implementation • NAACL 2022 • Deeksha Varshney, Akshara Prabhakar, Asif Ekbal

In this paper, we present a novel open-domain dialogue generation model which effectively utilizes the large-scale commonsense and named entity based knowledge in addition to the unstructured topic-specific knowledge associated with each utterance.

Dialogue Generation

CL-NERIL: A Cross-Lingual Model for NER in Indian Languages

1 code implementation • 23 Nov 2021 • Akshara Prabhakar, Gouri Sankar Majumder, Ashish Anand

We employ a variant of the Teacher-Student model and optimize it jointly on the pseudo labels of the Teacher model and predictions on the generated weakly labeled data.

Named Entity Recognition +4
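Joint optimization on teacher pseudo labels and weakly labeled data can be pictured as a weighted sum of two token-level cross-entropy terms. The mixing weight `alpha` and the plain-sum objective below are illustrative assumptions; the paper's exact loss may differ.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the gold label index per token.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def joint_loss(pseudo_probs, pseudo_labels, weak_probs, weak_labels, alpha=0.5):
    # Weighted sum of supervision from teacher pseudo labels and from
    # weakly labeled data; alpha is an illustrative hyperparameter.
    return (alpha * cross_entropy(pseudo_probs, pseudo_labels)
            + (1 - alpha) * cross_entropy(weak_probs, weak_labels))

# Toy student distributions over 3 NER tags for 2 tokens per source.
pseudo_probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
weak_probs = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
loss = joint_loss(pseudo_probs, np.array([0, 1]), weak_probs, np.array([0, 2]))
print(round(loss, 4))
```

Training the student against both signals lets the teacher's predictions regularize the noise in the weak labels, which is the intuition behind the joint objective.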
