Search Results for author: Samin Yeasar Arnob

Found 7 papers, 1 paper with code

Offline Policy Optimization in RL with Variance Regularization

no code implementations · 29 Dec 2022 · Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control · Offline RL · +1
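The abstract snippet above gives only the motivation, so as a rough illustration of what a variance regularizer could look like in this setting, here is a minimal sketch assuming an importance-weighted objective with a batch variance penalty; the function name, the weighting, and the coefficient lam are illustrative assumptions, not the paper's actual method:

    import torch

    def variance_regularized_objective(log_ratios, returns, lam=0.1):
        # Hypothetical variance-regularized offline objective:
        # maximize the mean of importance-weighted returns while
        # penalizing their variance across the batch.
        # log_ratios: log pi_theta(a|s) - log pi_behavior(a|s), shape [N]
        # returns:    Monte Carlo returns from the fixed dataset, shape [N]
        weighted = torch.exp(log_ratios) * returns
        # The variance penalty discourages the learned policy from
        # drifting far from the behavior policy that produced the data.
        objective = weighted.mean() - lam * weighted.var()
        return -objective  # loss to minimize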

Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning

no code implementations · 31 Dec 2021 · Samin Yeasar Arnob, Riashat Islam, Doina Precup

We hypothesize that empirically studying the sample complexity of offline reinforcement learning (RL) is crucial for the practical applications of RL in the real world.

Offline RL · reinforcement-learning · +1

Offline Policy Optimization with Variance Regularization

no code implementations · 1 Jan 2021 · Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Zhaoran Wang, Animesh Garg, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control · Offline RL · +1

Off-Policy Adversarial Inverse Reinforcement Learning

1 code implementation · ICML Workshop LifelongML 2020 · Samin Yeasar Arnob

Adversarial Imitation Learning (AIL) is a class of reinforcement learning (RL) algorithms that imitates an expert without receiving any reward from the environment and without providing expert behavior directly to the policy training.

Continuous Control · Imitation Learning · +3
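As background for the entry above, a generic adversarial-imitation reward in the GAIL/AIRL style can be sketched as follows; the discriminator interface and the exact reward form are assumptions for illustration, and the paper's off-policy variant may differ:

    import torch

    def adversarial_imitation_reward(discriminator, state, action):
        # discriminator(state, action) -> probability the pair came
        # from the expert demonstrations rather than the policy.
        # The policy never sees the environment reward or the expert's
        # actions directly; it is rewarded only for producing behavior
        # the discriminator cannot distinguish from the expert's.
        d = discriminator(state, action)
        return torch.log(d + 1e-8) - torch.log(1.0 - d + 1e-8)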

Doubly Robust Off-Policy Actor-Critic Algorithms for Reinforcement Learning

no code implementations · 11 Dec 2019 · Riashat Islam, Raihan Seraj, Samin Yeasar Arnob, Doina Precup

Furthermore, in cases where the reward function is stochastic, which can lead to high variance, doubly robust critic estimation can improve performance under corrupted, stochastic reward signals, indicating its usefulness for robust and safe reinforcement learning.

Continuous Control · reinforcement-learning · +2
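For reference, the standard per-trajectory doubly robust off-policy estimator (Jiang & Li, 2016) that the critic-estimation idea builds on can be sketched as below; the callables q_hat, v_hat, pi_e, and pi_b are assumed interfaces, and the paper's actor-critic variant may differ:

    def doubly_robust_value(trajectory, q_hat, v_hat, pi_e, pi_b, gamma=0.99):
        # Backward recursion:
        #   V_DR = V_hat(s) + rho * (r + gamma * V_DR' - Q_hat(s, a))
        # trajectory: list of (state, action, reward) tuples collected
        #             by the behavior policy pi_b.
        # q_hat(s, a), v_hat(s): learned critic and its state value.
        # pi_e(a, s), pi_b(a, s): evaluation/behavior action probabilities.
        v_dr = 0.0
        for state, action, reward in reversed(trajectory):
            rho = pi_e(action, state) / pi_b(action, state)  # importance ratio
            # The critic terms cut variance; the importance-weighted
            # residual corrects critic bias, hence "doubly robust".
            v_dr = v_hat(state) + rho * (reward + gamma * v_dr - q_hat(state, action))
        return v_dr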
