Search Results for author: Kavosh Asadi

Found 19 papers, 8 papers with code

Learning the Target Network in Function Space

no code implementations · 3 Jun 2024 · Kavosh Asadi, Yao Liu, Shoham Sabach, Ming Yin, Rasool Fakoor

We focus on the task of learning the value function in the reinforcement learning (RL) setting.

Reinforcement Learning (RL)
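
For background, a minimal semi-gradient TD(0) sketch with a separate target network, a standard construction in value-function learning; it does not reproduce the paper's function-space treatment of the target network.

    import numpy as np

    # Generic TD(0) with linear features and a frozen target-weight vector.
    def td0_update(w_online, w_target, phi_s, phi_s_next, reward, gamma=0.99, lr=0.05):
        v_s = w_online @ phi_s                  # online estimate V(s)
        v_next = w_target @ phi_s_next          # bootstrap from the target network
        td_error = reward + gamma * v_next - v_s
        return w_online + lr * td_error * phi_s  # gradient step on the online weights

    # Periodically: w_target = w_online.copy()   (hard target sync)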

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

no code implementations · 9 Oct 2023 · Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor

Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques -- e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA) -- in TAIL to adapt large pretrained models for new tasks with limited demonstration data.

Continual Learning, Imitation Learning +1
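
To illustrate the LoRA idea mentioned in the abstract, here is a minimal adapter that freezes a pretrained linear layer and learns a low-rank update on top of it; the class name, rank, and scaling are illustrative and not the paper's code.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA-style adapter: the pretrained weight is frozen and
        a low-rank update B @ A is trained instead."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False         # keep the large pretrained layer frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T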

Faster Deep Reinforcement Learning with Slower Online Network

1 code implementation · 10 Dec 2021 · Kavosh Asadi, Rasool Fakoor, Omer Gottesman, Taesup Kim, Michael L. Littman, Alexander J. Smola

In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network.

Deep Reinforcement Learning, Reinforcement Learning +1
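
One way to read "remain in the proximity of the target network" is a proximal penalty added to the TD loss; the penalty form and coefficient below are assumptions, not the paper's exact update.

    import torch
    import torch.nn.functional as F

    def dqn_loss_with_proximal_term(q_online, q_target, batch, online_params, target_params,
                                    gamma=0.99, c_prox=0.1):
        """Standard DQN TD loss plus a penalty keeping the online parameters
        close to the target-network parameters (illustrative reading)."""
        s, a, r, s2, done = batch
        with torch.no_grad():
            target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
        pred = q_online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        td_loss = F.mse_loss(pred, target)
        prox = sum(((p - t.detach()) ** 2).sum() for p, t in zip(online_params, target_params))
        return td_loss + c_prox * prox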

Coarse-Grained Smoothness for RL in Metric Spaces

no code implementations · 23 Oct 2021 · Omer Gottesman, Kavosh Asadi, Cameron Allen, Sam Lobel, George Konidaris, Michael Littman

We propose a new coarse-grained smoothness definition that generalizes the notion of Lipschitz continuity, is more widely applicable, and allows us to compute significantly tighter bounds on Q-functions, leading to improved learning.

Decision Making
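
For context, the ordinary Lipschitz bound that the paper's coarse-grained definition generalizes: a known Q-value at a nearby state pins the Q-value at the query state to an interval. The function below only sketches this standard bound, not the paper's tighter one.

    import numpy as np

    def lipschitz_q_bounds(q_known, dist, L):
        """If |Q(s,a) - Q(s',a)| <= L * d(s, s'), then a known value q_known = Q(s', a)
        at distance dist from s bounds Q(s, a) to [q_known - L*dist, q_known + L*dist]."""
        return q_known - L * dist, q_known + L * dist

    lo, hi = lipschitz_q_bounds(q_known=2.0, dist=0.3, L=5.0)   # -> (0.5, 3.5)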

Continuous Doubly Constrained Batch Reinforcement Learning

1 code implementation · NeurIPS 2021 · Rasool Fakoor, Jonas Mueller, Kavosh Asadi, Pratik Chaudhari, Alexander J. Smola

Reliant on too many experiments to learn good actions, current Reinforcement Learning (RL) algorithms have limited applicability in real-world settings, which can be too expensive to allow exploration.

Reinforcement Learning +1
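
As generic context for batch (offline) RL, a behavior-constrained actor loss: maximize the critic while staying near actions observed in the fixed dataset. The single penalty term is only a stand-in for the paper's two constraints; names and coefficient are illustrative.

    import torch

    def constrained_actor_loss(critic, actor, states, data_actions, lam=1.0):
        """Generic batch-RL actor objective (illustrative, not the paper's method)."""
        policy_actions = actor(states)
        value_term = -critic(states, policy_actions).mean()            # act greedily w.r.t. Q
        behavior_term = ((policy_actions - data_actions) ** 2).mean()  # stay near the data
        return value_term + lam * behavior_term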

Learning State Abstractions for Transfer in Continuous Control

2 code implementations · 8 Feb 2020 · Kavosh Asadi, David Abel, Michael L. Littman

In this work, we answer this question in the affirmative, where we take "simple learning algorithm" to be tabular Q-Learning, the "good representations" to be a learned state abstraction, and "challenging problems" to be continuous control tasks.

Continuous Control +4
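
A minimal sketch of the pairing described in the abstract: tabular Q-learning run on top of a state-abstraction function phi. In the paper the abstraction is learned; here phi is any hashable mapping (e.g. a grid discretizer), which is only a stand-in.

    from collections import defaultdict
    import numpy as np

    def make_tabular_q(phi, n_actions, gamma=0.99, lr=0.1):
        """Tabular Q-learning over abstract states phi(s)."""
        Q = defaultdict(lambda: np.zeros(n_actions))

        def update(s, a, r, s_next, done):
            target = r + (0.0 if done else gamma * Q[phi(s_next)].max())
            Q[phi(s)][a] += lr * (target - Q[phi(s)][a])

        return Q, update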

Lipschitz Lifelong Reinforcement Learning

1 code implementation · 15 Jan 2020 · Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman

We consider the problem of knowledge transfer when an agent is facing a series of Reinforcement Learning (RL) tasks.

Reinforcement Learning +2
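
A toy reading of the Lipschitz transfer idea: if optimal Q-values of two tasks can differ by at most a task-distance term, a solved task yields an optimistic initialization for the next one. Constants and names are illustrative, not the paper's bound.

    import numpy as np

    def transferred_upper_bound(Q_prev, task_distance, L_task, v_max):
        """Optimistic upper bound on the new task's optimal Q-values, assuming
        |Q*_new - Q*_prev| <= L_task * task_distance elementwise."""
        return np.minimum(Q_prev + L_task * task_distance, v_max)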

Combating the Compounding-Error Problem with a Multi-step Model

no code implementations · 30 May 2019 · Kavosh Asadi, Dipendra Misra, Seungchan Kim, Michael L. Littman

In this paper, we address the compounding-error problem by introducing a multi-step model that directly outputs the outcome of executing a sequence of actions.

Model-based Reinforcement Learning, Reinforcement Learning +2
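
A minimal sketch of the idea in the abstract: a model that predicts the state reached after a whole action sequence in one shot, instead of chaining a one-step model (which compounds errors). Architecture and sizes are illustrative.

    import torch
    import torch.nn as nn

    class MultiStepModel(nn.Module):
        """Maps (state, action sequence) directly to the resulting state."""
        def __init__(self, state_dim, action_dim, horizon, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + horizon * action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim),
            )

        def forward(self, state, action_sequence):        # action_sequence: (B, horizon, action_dim)
            flat = action_sequence.flatten(start_dim=1)
            return self.net(torch.cat([state, flat], dim=-1))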

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

no code implementations · 3 Dec 2018 · Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman

An agent with an inaccurate model of its environment faces a difficult choice: it can ignore the errors in its model and act in the real world in whatever way it determines is optimal with respect to its model.

Model-based Reinforcement Learning, Position +3

Towards a Simple Approach to Multi-step Model-based Reinforcement Learning

no code implementations · 31 Oct 2018 · Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman

When environmental interaction is expensive, model-based reinforcement learning offers a solution by planning ahead and avoiding costly mistakes.

Model-based Reinforcement Learning, Reinforcement Learning +2

Lipschitz Continuity in Model-based Reinforcement Learning

1 code implementation · ICML 2018 · Kavosh Asadi, Dipendra Misra, Michael L. Littman

We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz.

Model-based Reinforcement Learning, Reinforcement Learning +2
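
The paper's value-error bound depends on how Lipschitz the learned model is; as a rough companion, here is a crude empirical probe of a one-step model's Lipschitz constant over sampled states. This is only a diagnostic sketch, not the paper's analytical bound.

    import numpy as np

    def empirical_lipschitz(model, states, metric=lambda x, y: np.linalg.norm(x - y)):
        """Largest observed ratio d(model(s_i), model(s_j)) / d(s_i, s_j)."""
        ratios = []
        for i in range(len(states)):
            for j in range(i + 1, len(states)):
                d = metric(states[i], states[j])
                if d > 1e-8:
                    ratios.append(metric(model(states[i]), model(states[j])) / d)
        return max(ratios)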

Mean Actor Critic

2 code implementations · 1 Sep 2017 · Cameron Allen, Kavosh Asadi, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, Michael Littman

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning.

Atari Games, Reinforcement Learning +2
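
In the spirit of the abstract, a policy loss for discrete actions in which the critic is averaged over all actions weighted by the current policy, rather than evaluated only at the sampled action; a minimal sketch, not the authors' released implementation.

    import torch

    def mac_style_policy_loss(policy_logits, q_values):
        """Mean-Actor-Critic-style objective. Shapes: (batch, n_actions)."""
        probs = torch.softmax(policy_logits, dim=-1)
        expected_q = (probs * q_values.detach()).sum(dim=-1)   # E_{a~pi}[Q(s,a)]
        return -expected_q.mean()                              # maximize expected value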

Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning

3 code implementations · ACL 2017 · Jason D. Williams, Kavosh Asadi, Geoffrey Zweig

End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.

Reinforcement Learning +1

Sample-efficient Deep Reinforcement Learning for Dialog Control

no code implementations · 18 Dec 2016 · Kavosh Asadi, Jason D. Williams

Representing a dialog policy as a recurrent neural network (RNN) is attractive because it handles partial observability, infers a latent representation of state, and can be optimized with supervised learning (SL) or reinforcement learning (RL).

Deep Reinforcement Learning, Policy Gradient Methods +2
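
To make the SL-or-RL point concrete: the same action logits (here assumed to come from an RNN over the dialog history) can be trained with a supervised cross-entropy loss on labeled dialogs or with a REINFORCE-style loss on dialog reward. A generic sketch, not the paper's code.

    import torch
    import torch.nn.functional as F

    def sl_loss(action_logits, expert_actions):
        # Supervised learning: imitate actions from labeled dialogs.
        return F.cross_entropy(action_logits, expert_actions)

    def rl_loss(action_logits, taken_actions, returns):
        # Policy gradient on the same logits, weighted by dialog return.
        log_probs = F.log_softmax(action_logits, dim=-1)
        chosen = log_probs.gather(1, taken_actions.unsqueeze(1)).squeeze(1)
        return -(chosen * returns).mean()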

An Alternative Softmax Operator for Reinforcement Learning

1 code implementation · ICML 2017 · Kavosh Asadi, Michael L. Littman

A softmax operator applied to a set of values acts somewhat like the maximization function and somewhat like an average.

Decision Making, Reinforcement Learning +3
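
A log-mean-exp operator (often called mellowmax) has exactly the max-vs-average interpolation the abstract describes; the sketch below illustrates that behavior and is not claimed to be the paper's exact operator.

    import numpy as np

    def mellowmax(values, omega=5.0):
        """Log-mean-exp of omega-scaled values: tends to max(values) as omega grows
        and to mean(values) as omega approaches zero."""
        values = np.asarray(values, dtype=float)
        return np.log(np.mean(np.exp(omega * values))) / omega

    mellowmax([1.0, 2.0, 3.0], omega=0.01)   # ~ 2.0 (mean-like)
    mellowmax([1.0, 2.0, 3.0], omega=50.0)   # ~ 3.0 (max-like)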
