Search Results for author: Dheeraj Rajagopal

Found 27 papers, 10 papers with code

Think about it! Improving defeasible reasoning by first modeling the question scenario

1 code implementation · EMNLP 2021 · Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard Hovy

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
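
As a quick illustration of the setup this line of work studies, here is a hedged sketch of how a defeasible-inference instance is often encoded as a premise, a tentative hypothesis, and an update that strengthens or weakens it. The field names and the example sentences are invented for this sketch and are not taken from the paper's released data format.

```python
# Illustrative only: one common way to encode a defeasible-inference instance.
# Field names and the example below are hypothetical.
from dataclasses import dataclass

@dataclass
class DefeasibleInstance:
    premise: str      # background scenario
    hypothesis: str   # tentative conclusion drawn from the premise
    update: str       # new evidence that strengthens or weakens the conclusion
    label: str        # "strengthener" or "weakener"

example = DefeasibleInstance(
    premise="The sidewalk is wet.",
    hypothesis="It rained last night.",
    update="A street-cleaning truck just passed by.",
    label="weakener",  # the new evidence undercuts the conclusion
)
print(example)
```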

Confidence Calibration and Rationalization for LLMs via Multi-Agent Deliberation

no code implementations · 14 Apr 2024 · Ruixin Yang, Dheeraj Rajagopal, Shirley Anugrah Hayati, Bin Hu, Dongyeop Kang

Uncertainty estimation is a significant issue for current large language models (LLMs), which are generally poorly calibrated and over-confident, especially with reinforcement learning from human feedback (RLHF).

How Far Can We Extract Diverse Perspectives from Large Language Models?

1 code implementation · 16 Nov 2023 · Shirley Anugrah Hayati, Minhwa Lee, Dheeraj Rajagopal, Dongyeop Kang

In this study, we investigate LLMs' capacity for generating diverse perspectives and rationales on subjective topics, such as social norms and argumentative texts.

Tasks: Sentence, Sentence Embeddings (+1 more)

StyLEx: Explaining Style Using Human Lexical Annotations

1 code implementation · 14 Oct 2022 · Shirley Anugrah Hayati, Kyumin Park, Dheeraj Rajagopal, Lyle Ungar, Dongyeop Kang

Large pre-trained language models have achieved impressive results on various style classification tasks, but they often learn spurious domain-specific words to make predictions (Hayati et al., 2021).

Tasks: Sentence

Template Filling for Controllable Commonsense Reasoning

no code implementations · 31 Oct 2021 · Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy

To enable better controllability, we propose to study commonsense reasoning as a template filling task (TemplateCSR), where a language model fills reasoning templates with the given constraints as control factors.

Tasks: Multiple-choice
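
A rough sketch of what a reasoning-template filling setup could look like for the entry above; the template text, slot names, and constraint handling are invented for illustration and are not the paper's actual TemplateCSR format.

```python
# Hypothetical sketch of filling a reasoning template under constraints.
# The template, slot names, and constraints are illustrative only.
from string import Template

reasoning_template = Template(
    "If $cause happens, then $effect is likely, because $explanation."
)

# Control factors: constraints that the generated slot fillers must satisfy.
constraints = {"cause": "heavy rain", "effect": "flooded streets"}

# In the actual task a language model would generate the remaining slots;
# here a filler is hard-coded to keep the sketch self-contained.
slot_fillers = dict(constraints, explanation="storm drains overflow")

print(reasoning_template.substitute(slot_fillers))
# -> If heavy rain happens, then flooded streets is likely, because storm drains overflow.
```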

Improving Neural Model Performance through Natural Language Feedback on Their Explanations

no code implementations · 18 Apr 2021 · Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Yiming Yang, Peter Clark, Keisuke Sakaguchi, Ed Hovy

A class of explainable NLP models for reasoning tasks support their decisions by generating free-form or structured explanations, but what happens when these supporting structures contain errors?

A Dataset for Tracking Entities in Open Domain Procedural Text

no code implementations · EMNLP 2020 · Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, Eduard Hovy

Our solution is a new task formulation where, given just a procedural text as input, the task is to generate a set of state change tuples (entity, attribute, before-state, after-state) for each step, where the entity, attribute, and state values must be predicted from an open vocabulary.

Tasks: Attribute
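
The tuple format described in the entry above can be pictured with a minimal sketch; the example procedural step and all values below are hypothetical, not drawn from the dataset.

```python
# Minimal sketch of the (entity, attribute, before-state, after-state) tuples
# described above; the example step and values are made up for illustration.
from typing import List, NamedTuple

class StateChange(NamedTuple):
    entity: str
    attribute: str
    before_state: str
    after_state: str

# Hypothetical output for the step "Pour the batter into the pan and bake."
step_changes: List[StateChange] = [
    StateChange(entity="batter", attribute="location", before_state="bowl", after_state="pan"),
    StateChange(entity="batter", attribute="consistency", before_state="liquid", after_state="solid"),
]

for change in step_changes:
    print(change)
```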

StructSum: Summarization via Structured Representations

1 code implementation · EACL 2021 · Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, Yulia Tsvetkov

To this end, we propose incorporating latent and explicit dependencies across sentences in the source document into end-to-end single-document summarization models.

Tasks: Abstractive Text Summarization, Document Summarization (+1 more)

Modeling the Relationship between User Comments and Edits in Document Revision

no code implementations · IJCNLP 2019 · Xuchao Zhang, Dheeraj Rajagopal, Michael Gamon, Sujay Kumar Jauhar, Chang-Tien Lu

Thus, in this paper we explore the relationship between comments and edits by defining two novel, related tasks: Comment Ranking and Edit Anchoring.

Tasks: Management

Simple and Effective Semi-Supervised Question Answering

no code implementations · NAACL 2018 · Bhuwan Dhingra, Danish Pruthi, Dheeraj Rajagopal

Recent success of deep learning models for the task of extractive Question Answering (QA) hinges on the availability of large annotated corpora.

Tasks: Extractive Question-Answering, Question Answering (+1 more)

Gated-Attention Architectures for Task-Oriented Language Grounding

1 code implementation · 22 Jun 2017 · Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov

To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map them to visual elements and actions in the environment.

Tasks: Imitation Learning
