Search Results for author: Ramakanth Pasunuru

Found 40 papers, 19 papers with code

Continual Few-Shot Learning for Text Classification

1 code implementation • EMNLP 2021 • Ramakanth Pasunuru, Veselin Stoyanov, Mohit Bansal

In this work, we propose a continual few-shot learning (CFL) task, in which a system is challenged with a difficult phenomenon and asked to learn to correct mistakes with only a few (10 to 15) training examples.

continual few-shot learning • Few-Shot Learning +4

An Overview of Uncertainty Calibration for Text Classification and the Role of Distillation

no code implementations • ACL (RepL4NLP) 2021 • Han Guo, Ramakanth Pasunuru, Mohit Bansal

Many recalibration methods have been proposed in the literature for quantifying predictive uncertainty and calibrating model outputs, with varying degrees of complexity.

text-classification • Text Classification
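
One widely used recalibration method of the kind this paper surveys is post-hoc temperature scaling. Below is a minimal sketch, assuming hypothetical `val_logits` (an N×C array of validation logits) and `val_labels` (N integer class labels); it illustrates the general technique only, not the paper's specific methods or its distillation setup.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, temperature):
    # Average negative log-likelihood of the true labels under the
    # temperature-scaled softmax.
    probs = softmax(logits / temperature)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels):
    # Post-hoc recalibration: pick the single scalar T minimizing validation
    # NLL. Accuracy is unchanged (argmax is invariant to dividing by T > 0);
    # only the confidence of the predictions is rescaled.
    grid = np.linspace(0.5, 5.0, 91)
    return min(grid, key=lambda t: nll(val_logits, val_labels, t))
```

Because it leaves predictions untouched and fits a single parameter, temperature scaling is a common baseline against which more complex recalibrators are compared.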

Efficient Tool Use with Chain-of-Abstraction Reasoning

no code implementations • 30 Jan 2024 • Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang

LLM agents trained with our method also show more efficient tool use, with inference speed being on average ~1.4x faster than baseline tool-augmented LLMs.

Math • Mathematical Reasoning +1
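
The method's core idea is to decouple planning from tool calls: the LLM first writes a reasoning chain with abstract placeholders, and domain tools then fill the placeholders in. A toy sketch of that fill-in step follows; the `[y1 = add(3, 4)]` placeholder syntax, the `TOOLS` registry, and the example chain are illustrative assumptions, not the paper's exact format.

```python
import re

# Hypothetical tool registry; the paper uses domain tools such as math solvers.
TOOLS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def reify(chain):
    # Fill placeholders like "[y1 = add(3, 4)]" with tool outputs. Later
    # placeholders may reference earlier variables (y1, y2, ...), which is
    # what lets the model plan the whole chain before any tool runs.
    env = {}
    def fill(match):
        var, fn, raw_args = match.groups()
        args = [env[a.strip()] if a.strip() in env else float(a)
                for a in raw_args.split(",")]
        env[var] = TOOLS[fn](*args)
        return str(env[var])
    return re.sub(r"\[(y\d+) = (\w+)\(([^)]*)\)\]", fill, chain)

print(reify("Sum: [y1 = add(3, 4)]; doubled: [y2 = mul(y1, 2)]."))
# Sum: 7.0; doubled: 14.0.
```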

PathFinder: Guided Search over Multi-Step Reasoning Paths

no code implementations • 8 Dec 2023 • Olga Golovneva, Sean O'Brien, Ramakanth Pasunuru, Tianlu Wang, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz

Using constrained reasoning, PathFinder integrates novel quality constraints, pruning, and exploration methods to enhance the efficiency and quality of generation.

Pathfinder

Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading

no code implementations • 8 Oct 2023 • Howard Chen, Ramakanth Pasunuru, Jason Weston, Asli Celikyilmaz

Large language models (LLMs) have made great strides due to the effectiveness of the self-attention mechanism, which processes and compares all tokens at once.

Question Answering • Retrieval

Crystal: Introspective Reasoners Reinforced with Self-Feedback

1 code implementation • 7 Oct 2023 • Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz

Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized.

Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding

no code implementations • 26 Sep 2023 • Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz

The key idea is not to throw out the value network, a byproduct of PPO training for evaluating partial output sequences, when decoding text out of the policy network.

Text Generation
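
The snippet above is concrete enough to sketch: the value network that PPO produces as a byproduct already scores partial sequences, so it can steer decoding. The sketch below shows that signal as a single value-reranked decoding step rather than the paper's full Monte-Carlo tree search; `policy_logits` (a token-to-logit dict) and `value_fn` (the value network as a callable on token prefixes) are hypothetical stand-ins.

```python
import math

def value_guided_step(policy_logits, value_fn, prefix, k=5, alpha=0.5):
    # Take the top-k next tokens under the policy, then rerank them by a
    # mix of policy log-probability and the value network's estimate of
    # the extended prefix. The paper builds a full MCTS around this signal;
    # this sketch shows only one greedy lookahead step.
    log_z = math.log(sum(math.exp(l) for l in policy_logits.values()))
    top_k = sorted(policy_logits, key=policy_logits.get, reverse=True)[:k]
    def score(token):
        log_p = policy_logits[token] - log_z
        return alpha * log_p + (1 - alpha) * value_fn(prefix + [token])
    return max(top_k, key=score)
```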

Shepherd: A Critic for Language Model Generation

1 code implementation • 8 Aug 2023 • Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O'Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz

As large language models improve, there is increasing interest in techniques that leverage these models' capabilities to refine their own outputs.

Language Modelling

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

1 code implementation • 22 Dec 2022 • Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov

To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks.

Language Modelling • Meta-Learning +2
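
To make the three generalization levels concrete, here is an illustrative split builder over a hypothetical `benchmark` mapping of category -> task -> instances; the logic is a plain reading of the description above, not the released OPT-IML Bench code.

```python
import random

def make_splits(benchmark, heldout_categories, heldout_tasks,
                instance_frac=0.1, seed=0):
    # Three evaluation levels, from hardest to easiest generalization:
    #   1. tasks from fully held-out categories,
    #   2. held-out tasks from seen categories,
    #   3. held-out instances from seen (training) tasks.
    rng = random.Random(seed)
    train, level1, level2, level3 = [], [], [], []
    for category, tasks in benchmark.items():
        for task, instances in tasks.items():
            if category in heldout_categories:
                level1 += instances
            elif task in heldout_tasks:
                level2 += instances
            else:
                shuffled = list(instances)
                rng.shuffle(shuffled)
                cut = int(len(shuffled) * instance_frac)
                level3 += shuffled[:cut]
                train += shuffled[cut:]
    return train, level1, level2, level3
```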

Complementary Explanations for Effective In-Context Learning

1 code implementation • 25 Nov 2022 • Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru

Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective.

In-Context Learning

Efficient Large Scale Language Modeling with Mixtures of Experts

no code implementations • 20 Dec 2021 • Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, Ves Stoyanov

This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full-shot fine-tuning.

Language Modelling

Proposition-Level Clustering for Multi-Document Summarization

2 code implementations • NAACL 2022 • Ori Ernst, Avi Caciularu, Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Jacob Goldberger, Ido Dagan

Text clustering methods were traditionally incorporated into multi-document summarization (MDS) as a means for coping with considerable information repetition.

Clustering • Document Summarization +3

Multi-Document Keyphrase Extraction: Dataset, Baselines and Review

1 code implementation • 3 Oct 2021 • Ori Shapira, Ramakanth Pasunuru, Ido Dagan, Yael Amsterdamer

Keyphrase extraction has been extensively researched within the single-document setting, with an abundance of methods, datasets and applications.

Keyphrase Extraction

Extending Multi-Document Summarization Evaluation to the Interactive Setting

1 code implementation • NAACL 2021 • Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, Ido Dagan

In this paper, we develop an end-to-end evaluation framework for interactive summarization, focusing on expansion-based interaction, which considers the accumulating information along a user session.

Document Summarization • Multi-Document Summarization

Data Augmentation for Abstractive Query-Focused Multi-Document Summarization

1 code implementation • 2 Mar 2021 • Ramakanth Pasunuru, Asli Celikyilmaz, Michel Galley, Chenyan Xiong, Yizhe Zhang, Mohit Bansal, Jianfeng Gao

The progress in Query-focused Multi-Document Summarization (QMDS) has been limited by the lack of sufficient large-scale, high-quality training datasets.

Data Augmentation • Document Summarization +1

Dual Reinforcement-Based Specification Generation for Image De-Rendering

no code implementations • 2 Mar 2021 • Ramakanth Pasunuru, David Rosenberg, Gideon Mann, Mohit Bansal

Since these are sequence models, we must choose an ordering of the objects in the graphics programs for likelihood training.

Decoder • Inductive Bias

DORB: Dynamically Optimizing Multiple Rewards with Bandits

no code implementations • EMNLP 2020 • Ramakanth Pasunuru, Han Guo, Mohit Bansal

Further, it is important to consider using a dynamic combination and curriculum of metric rewards that flexibly changes over time.

Data-to-Text Generation • Question Generation +1
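
As the title indicates, the dynamic reward curriculum is driven by a multi-armed bandit whose arms are the individual metric rewards. Below is a sketch using Exp3-style exponential weights, a standard choice for non-stationary bandits of this kind; the exact algorithm and the `train_step`/`evaluate` callables are assumptions for illustration, not the paper's implementation.

```python
import math, random

def bandit_reward_curriculum(rewards, train_step, evaluate,
                             rounds=1000, gamma=0.1):
    # Each arm is one metric reward (e.g. ROUGE, entailment, fluency).
    # After a training step with the chosen reward, the observed dev-metric
    # gain updates that arm's weight, so the curriculum of rewards shifts
    # over time instead of staying fixed.
    weights = [1.0] * len(rewards)
    for _ in range(rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / len(rewards)
                 for w in weights]
        arm = random.choices(range(len(rewards)), weights=probs)[0]
        train_step(rewards[arm])
        gain = evaluate()  # scalar feedback, assumed scaled to [0, 1]
        weights[arm] *= math.exp(gamma * gain / (probs[arm] * len(rewards)))
```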

Evaluating Interactive Summarization: an Expansion-Based Framework

no code implementations • 17 Sep 2020 • Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, Ido Dagan

Allowing users to interact with multi-document summarizers is a promising direction towards improving and customizing summary results.

Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline

1 code implementation • CoNLL (EMNLP) 2021 • Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan

Aligning sentences in a reference summary with their counterparts in source documents has been shown to be a useful auxiliary summarization task, notably for generating training data for salience detection.

Clustering • Document Summarization +1

Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits

no code implementations • 13 Jan 2020 • Han Guo, Ramakanth Pasunuru, Mohit Bansal

Next, we develop a DistanceNet model which uses these distance measures, or a mixture of these distance measures, as an additional loss function to be minimized jointly with the task's loss function, so as to achieve better unsupervised domain adaptation.

General Classification • Sentiment Analysis +3
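
The joint objective described above can be written as task loss plus a weighted domain-distance term. A minimal PyTorch sketch with a mean-feature L2 distance follows; the paper studies several distance measures (and a bandit over source domains), so this particular distance, the weight `lam`, and the tensor names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distance_loss(src_feats, tgt_feats):
    # One simple domain-distance measure: squared L2 distance between the
    # mean source and mean target feature vectors (a first-moment proxy
    # for distribution distance; the paper compares several measures).
    return (src_feats.mean(dim=0) - tgt_feats.mean(dim=0)).pow(2).sum()

def joint_loss(logits, labels, src_feats, tgt_feats, lam=0.1):
    # Supervised task loss on labeled source data plus lam times the
    # unsupervised distance term, minimized jointly so the encoder is
    # pushed toward domain-invariant features.
    return F.cross_entropy(logits, labels) + lam * distance_loss(src_feats, tgt_feats)
```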

Continual and Multi-Task Architecture Search

1 code implementation • ACL 2019 • Ramakanth Pasunuru, Mohit Bansal

Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task.

Continual Learning • General Classification +8

AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning

no code implementations • NAACL 2019 • Han Guo, Ramakanth Pasunuru, Mohit Bansal

To address these issues, we present AutoSeM, a two-stage MTL pipeline, where the first stage automatically selects the most useful auxiliary tasks via a Beta-Bernoulli multi-armed bandit with Thompson Sampling, and the second stage learns the training mixing ratio of these selected auxiliary tasks via a Gaussian Process based Bayesian optimization framework.

Bayesian Optimization • Inductive Bias +2
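
The first stage above is concrete enough to sketch: a Beta-Bernoulli bandit with Thompson Sampling, where each arm is a candidate auxiliary task and the Bernoulli "success" is whether a round of training on it helped the primary task. The `improves_dev_metric` callable is a hypothetical stand-in for that feedback signal.

```python
import random

def thompson_task_selection(num_tasks, improves_dev_metric, rounds=500):
    # Beta-Bernoulli Thompson Sampling: keep a Beta(alpha, beta) posterior
    # over each auxiliary task's usefulness, draw one sample from every
    # posterior, and train this round with the highest-sampled task.
    alpha = [1.0] * num_tasks
    beta = [1.0] * num_tasks
    for _ in range(rounds):
        draws = [random.betavariate(alpha[i], beta[i])
                 for i in range(num_tasks)]
        task = max(range(num_tasks), key=draws.__getitem__)
        if improves_dev_metric(task):  # Bernoulli feedback for this round
            alpha[task] += 1
        else:
            beta[task] += 1
    # Rank tasks by posterior mean; the top ones feed the paper's second
    # stage (mixing-ratio search via Gaussian-Process Bayesian optimization).
    return sorted(range(num_tasks),
                  key=lambda i: alpha[i] / (alpha[i] + beta[i]), reverse=True)
```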

Game-Based Video-Context Dialogue

1 code implementation • EMNLP 2018 • Ramakanth Pasunuru, Mohit Bansal

Current dialogue systems focus mostly on textual and speech context knowledge and usually involve only two speakers.

Retrieval

Dynamic Multi-Level Multi-Task Learning for Sentence Simplification

no code implementations • COLING 2018 • Han Guo, Ramakanth Pasunuru, Mohit Bansal

In this work, we first present a strong pointer-copy mechanism based sequence-to-sequence sentence simplification model, and then improve its entailment and paraphrasing capabilities via multi-task learning with related auxiliary tasks of entailment and paraphrase generation.

Multi-Task Learning • Paraphrase Generation +3

Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation

no code implementations • ACL 2018 • Han Guo, Ramakanth Pasunuru, Mohit Bansal

An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document.

Abstractive Text Summarization • Decoder +3

Multi-Reward Reinforced Summarization with Saliency and Entailment

no code implementations • NAACL 2018 • Ramakanth Pasunuru, Mohit Bansal

Abstractive text summarization is the task of compressing and rewriting a long document into a short summary while maintaining saliency, directed logical entailment, and non-redundancy.

Abstractive Text Summarization • Reinforcement Learning
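
Sequence-level rewards like saliency and entailment are non-differentiable, so this line of work optimizes them with self-critical policy gradient. The sketch below shows a weighted-sum combination of two rewards; the weight `w` and the `rouge_saliency`/`entailment_score` callables are illustrative assumptions (the paper also explores alternating rewards across mini-batches rather than summing them).

```python
import torch

def self_critical_loss(sample_logprobs, sampled, greedy, reference,
                       rouge_saliency, entailment_score, w=0.5):
    # Combined sequence-level reward for a generated summary.
    def reward(summary):
        return (w * rouge_saliency(summary, reference)
                + (1 - w) * entailment_score(reference, summary))
    # Self-critical baseline: the greedy decode's reward. The gradient
    # increases the likelihood of sampled summaries that beat greedy ones.
    advantage = reward(sampled) - reward(greedy)
    return -advantage * sample_logprobs.sum()
```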

Towards Improving Abstractive Summarization via Entailment Generation

no code implementations • WS 2017 • Ramakanth Pasunuru, Han Guo, Mohit Bansal

Abstractive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-to-sequence models.

Abstractive Text Summarization • Decoder +3

Reinforced Video Captioning with Entailment Rewards

no code implementations • EMNLP 2017 • Ramakanth Pasunuru, Mohit Bansal

Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize only a word-level cross-entropy loss during training.

reinforcement-learning • Reinforcement Learning +3

Multi-Task Video Captioning with Video and Entailment Generation

no code implementations • ACL 2017 • Ramakanth Pasunuru, Mohit Bansal

Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data.

Decoder • Multi-Task Learning +2
