Search Results for author: Dipendra Misra

Found 33 papers, 13 papers with code

Provable Interactive Learning with Hindsight Instruction Feedback

no code implementations • 14 Apr 2024 • Dipendra Misra, Aldo Pacchiano, Robert E. Schapire

We study interactive learning in a setting where the agent has to generate a response (e.g., an action or trajectory) given a context and an instruction.

Dataset Reset Policy Optimization for RLHF

2 code implementations • 12 Apr 2024 • Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Kianté Brantley, Dipendra Misra, Jason D. Lee, Wen Sun

Motivated by the fact that an offline preference dataset provides informative states (i.e., data that is preferred by the labelers), our new algorithm, Dataset Reset Policy Optimization (DR-PO), integrates the existing offline preference dataset into the online policy training procedure via dataset reset: it directly resets the policy optimizer to the states in the offline dataset, instead of always starting from the initial state distribution.

Reinforcement Learning (RL)
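The dataset-reset mechanism described above can be sketched in a few lines: instead of always rolling out from the initial state distribution, each rollout (with some probability) restarts from a state drawn from the offline preference data. This is a toy illustration of the reset idea only, not the DR-PO algorithm; the environment, policy, and all function names are hypothetical.

```python
import random

def collect_rollout(env_step, policy, start_state, horizon):
    """Roll out `policy` from `start_state` for `horizon` steps."""
    traj, s = [], start_state
    for _ in range(horizon):
        a = policy(s)
        s_next, r = env_step(s, a)
        traj.append((s, a, r))
        s = s_next
    return traj

def dataset_reset_rollouts(env_step, policy, offline_states, n, horizon,
                           p_reset=1.0, init_state=0):
    """Collect `n` rollouts, resetting to a random offline-dataset state
    with probability `p_reset` instead of the initial state."""
    rollouts = []
    for _ in range(n):
        s0 = (random.choice(offline_states)
              if random.random() < p_reset else init_state)
        rollouts.append(collect_rollout(env_step, policy, s0, horizon))
    return rollouts
```

With `p_reset=1.0`, every rollout begins at a state the labelers already marked as informative, which is the reset behaviour the abstract describes.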

Towards Principled Representation Learning from Videos for Reinforcement Learning

no code implementations • 20 Mar 2024 • Dipendra Misra, Akanksha Saran, Tengyang Xie, Alex Lamb, John Langford

We study two types of settings: one where there is iid noise in the observation, and a more challenging one where there is also exogenous noise, i.e., non-iid noise that is temporally correlated, such as the motion of people or cars in the background.

Contrastive Learning reinforcement-learning +1

Policy Improvement using Language Feedback Models

no code implementations • 12 Feb 2024 • Victor Zhong, Dipendra Misra, Xingdi Yuan, Marc-Alexandre Côté

We introduce Language Feedback Models (LFMs) that identify desirable behaviour (actions that help achieve tasks specified in the instruction) for imitation learning in instruction following.

Behavioural cloning Instruction Following

The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction

1 code implementation • 21 Dec 2023 • Pratyusha Sharma, Jordan T. Ash, Dipendra Misra

Transformer-based Large Language Models (LLMs) have become a fixture in modern machine learning.
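The layer-selective rank reduction named in the title can be illustrated generically: choose one weight matrix and replace it with its truncated-SVD low-rank approximation, leaving all other weights untouched. This is a minimal NumPy sketch of the generic low-rank edit, not the paper's procedure; the `weights` dictionary and layer names are made up.

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of W via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

def rank_reduce_layer(weights, layer, rank):
    """Return a copy of `weights` with one selected layer's weight
    matrix replaced by its low-rank approximation."""
    edited = dict(weights)
    edited[layer] = low_rank_approx(weights[layer], rank)
    return edited
```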

LLF-Bench: Benchmark for Interactive Learning from Language Feedback

no code implementations • 11 Dec 2023 • Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, Adith Swaminathan

We introduce a new benchmark, LLF-Bench (Learning from Language Feedback Benchmark; pronounced as "elf-bench"), to evaluate the ability of AI agents to interactively learn from natural language feedback and instructions.

Information Retrieval OpenAI Gym

Learning to Generate Better Than Your LLM

1 code implementation • 20 Jun 2023 • Jonathan D. Chang, Kiante Brantley, Rajkumar Ramamurthy, Dipendra Misra, Wen Sun

In particular, we extend RL algorithms to allow them to interact with a dynamic black-box guide LLM and propose RL with guided feedback (RLGF), a suite of RL algorithms for LLM fine-tuning.

Conditional Text Generation reinforcement-learning +1

Towards Data-Driven Offline Simulations for Online Reinforcement Learning

1 code implementation • 14 Nov 2022 • Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, Sebastian Kochman

Modern decision-making systems, from robots to web recommendation engines, are expected to adapt: to user preferences, changing circumstances or even new tasks.

Decision Making reinforcement-learning +1

Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information

1 code implementation • 31 Oct 2022 • Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, John Langford

We find that contemporary representation learning techniques can fail on datasets where the noise is a complex and time-dependent process, which is prevalent in practical applications.

Offline RL Reinforcement Learning (RL) +1

Provable Safe Reinforcement Learning with Binary Feedback

1 code implementation • 26 Oct 2022 • Andrew Bennett, Dipendra Misra, Nathan Kallus

Many existing approaches to safe RL rely on receiving numeric safety feedback, but in many cases this feedback can only take binary values; that is, whether an action in a given state is safe or unsafe.

Active Learning reinforcement-learning +2

Guaranteed Discovery of Control-Endogenous Latent States with Multi-Step Inverse Models

no code implementations • 17 Jul 2022 • Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, John Langford

In many sequential decision-making tasks, the agent is not able to model the full complexity of the world, which consists of multitudes of relevant and irrelevant information.

Decision Making
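A multi-step inverse model of the kind named in the title is trained to predict the first action a_t from the current observation and an observation k steps later, for several values of k. The following is a minimal sketch of assembling such training pairs from a trajectory of (observation, action) tuples; the encoder and classifier that would consume the pairs are omitted, and the function name is hypothetical.

```python
def multistep_inverse_pairs(trajectory, max_k):
    """Build ((o_t, o_{t+k}), a_t) training pairs for k = 1..max_k.

    `trajectory` is a list of (observation, action) tuples; the target
    is always the FIRST action taken after observing o_t."""
    pairs = []
    for t in range(len(trajectory)):
        o_t, a_t = trajectory[t]
        for k in range(1, max_k + 1):
            if t + k < len(trajectory):
                o_tk, _ = trajectory[t + k]
                pairs.append(((o_t, o_tk), a_t))
    return pairs
```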

Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information

no code implementations • 9 Jun 2022 • Yonathan Efroni, Dylan J. Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford

In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, with both relevant and irrelevant information about the task at hand.

reinforcement-learning Reinforcement Learning (RL)

Provably Sample-Efficient RL with Side Information about Latent Dynamics

no code implementations • 27 May 2022 • Yao Liu, Dipendra Misra, Miro Dudík, Robert E. Schapire

We study reinforcement learning (RL) in settings where observations are high-dimensional, but where an RL agent has access to abstract knowledge about the structure of the state space, as is the case, for example, when a robot is tasked to go to a specific room in a building using observations from its own camera, while having access to the floor plan.

reinforcement-learning Reinforcement Learning (RL) +1

Understanding Contrastive Learning Requires Incorporating Inductive Biases

no code implementations • 28 Feb 2022 • Nikunj Saunshi, Jordan Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham Kakade, Akshay Krishnamurthy

Contrastive learning is a popular form of self-supervised learning that encourages augmentations (views) of the same input to have more similar representations compared to augmentations of different inputs.

Contrastive Learning Self-Supervised Learning

Provable RL with Exogenous Distractors via Multistep Inverse Dynamics

no code implementations • 17 Oct 2021 • Yonathan Efroni, Dipendra Misra, Akshay Krishnamurthy, Alekh Agarwal, John Langford

We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (EX-BMDP), for rich observation RL.

Reinforcement Learning (RL) Representation Learning

Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics

no code implementations • ICLR 2022 • Yonathan Efroni, Dipendra Misra, Akshay Krishnamurthy, Alekh Agarwal, John Langford

We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (EX-BMDP), for rich observation RL.

Reinforcement Learning (RL) Representation Learning

Interactive Learning from Activity Description

1 code implementation • 13 Feb 2021 • Khanh Nguyen, Dipendra Misra, Robert Schapire, Miro Dudík, Patrick Shafto

We present a novel interactive learning protocol that enables training request-fulfilling agents by verbally describing their activities.

General Reinforcement Learning Grounded language learning +2

Provable Rich Observation Reinforcement Learning with Combinatorial Latent States

no code implementations • ICLR 2021 • Dipendra Misra, Qinghua Liu, Chi Jin, John Langford

We propose a novel setting for reinforcement learning that combines two common real-world difficulties: presence of observations (such as camera images) and factored states (such as location of objects).

Contrastive Learning reinforcement-learning +1

Learning the Linear Quadratic Regulator from Nonlinear Observations

no code implementations • NeurIPS 2020 • Zakaria Mhammedi, Dylan J. Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, John Langford

We introduce a new algorithm, RichID, which learns a near-optimal policy for the RichLQR with sample complexity scaling only with the dimension of the latent state space and the capacity of the decoder function class.

Continuous Control

Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning

no code implementations • ICML 2020 • Dipendra Misra, Mikael Henaff, Akshay Krishnamurthy, John Langford

We present an algorithm, HOMER, for exploration and reinforcement learning in rich observation environments that are summarizable by an unknown latent state space.

reinforcement-learning Reinforcement Learning (RL) +1

Combating the Compounding-Error Problem with a Multi-step Model

no code implementations • 30 May 2019 • Kavosh Asadi, Dipendra Misra, Seungchan Kim, Michael L. Littman

In this paper, we address the compounding-error problem by introducing a multi-step model that directly outputs the outcome of executing a sequence of actions.

Model-based Reinforcement Learning reinforcement-learning +1
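The compounding-error problem above can be illustrated numerically: composing a slightly biased one-step model accumulates error with every step, while a model that predicts the outcome of the whole action sequence in one call pays its bias only once. A toy sketch with scalar states and hand-written models, not the paper's learned models:

```python
def compose_one_step(model, s, actions):
    """Predict the outcome of an action sequence by chaining a one-step
    model; its per-step error compounds across the sequence."""
    for a in actions:
        s = model(s, a)
    return s

def multi_step_predict(model, s, actions):
    """Predict the outcome with a single call to a multi-step model that
    directly outputs the result of executing the whole sequence."""
    return model(s, actions)
```

For true dynamics s' = s + a and models with a 0.1 per-prediction bias, chaining the one-step model over five steps yields error 0.5, while the single multi-step prediction yields error 0.1.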

Early Fusion for Goal Directed Robotic Vision

no code implementations • 21 Nov 2018 • Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox

Building perceptual systems for robotics which perform well under tight computational budgets requires novel architectures which rethink the traditional computer vision pipeline.

Imitation Learning Retrieval

Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction

1 code implementation • 10 Nov 2018 • Valts Blukis, Dipendra Misra, Ross A. Knepper, Yoav Artzi

We propose an approach for mapping natural language instructions and raw observations to continuous control of a quadcopter drone.

Continuous Control Imitation Learning +2

Towards a Simple Approach to Multi-step Model-based Reinforcement Learning

no code implementations • 31 Oct 2018 • Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman

When environmental interaction is expensive, model-based reinforcement learning offers a solution by planning ahead and avoiding costly mistakes.

Model-based Reinforcement Learning reinforcement-learning +1

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

no code implementations • EMNLP 2018 • Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih

Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm.

Question Answering Semantic Parsing

Lipschitz Continuity in Model-based Reinforcement Learning

1 code implementation • ICML 2018 • Kavosh Asadi, Dipendra Misra, Michael L. Littman

We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz.

Model-based Reinforcement Learning reinforcement-learning +1
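Lipschitz continuity of a model can be probed empirically by lower-bounding its Lipschitz constant with the largest pairwise slope over sample points. The following is a generic one-dimensional sketch of that check, unrelated to the paper's proofs:

```python
import itertools

def empirical_lipschitz(f, points):
    """Lower-bound the Lipschitz constant of f: R -> R by the largest
    slope |f(x) - f(y)| / |x - y| over all pairs of sample points."""
    best = 0.0
    for x, y in itertools.combinations(points, 2):
        if x != y:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best
```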

CHALET: Cornell House Agent Learning Environment

2 code implementations • 23 Jan 2018 • Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk, Yoav Artzi

We present CHALET, a 3D house simulator with support for navigation and manipulation.
