Search Results for author: Harsh Satija

Found 6 papers, 3 papers with code

A Survey of Exploration Methods in Reinforcement Learning

no code implementations • 1 Sep 2021 • Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, Doina Precup

Exploration is an essential component of reinforcement learning algorithms, where agents need to learn how to predict and control unknown and often stochastic environments.

reinforcement-learning • Reinforcement Learning (RL)

Locally Persistent Exploration in Continuous Control Tasks with Sparse Rewards

1 code implementation • 26 Dec 2020 • Susan Amin, Maziar Gomrokchi, Hossein Aboutalebi, Harsh Satija, Doina Precup

A major challenge in reinforcement learning is the design of exploration strategies, especially for environments with sparse reward structures and continuous state and action spaces.

Continuous Control

Constrained Markov Decision Processes via Backward Value Functions

no code implementations • ICML 2020 • Harsh Satija, Philip Amortila, Joelle Pineau

In standard RL, the agent is incentivized to explore any behavior as long as it maximizes rewards, but in the real world, undesired behavior can damage either the system or the agent in a way that breaks the learning process itself.

Reinforcement Learning (RL)
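
For context, the snippet above motivates the standard constrained-MDP setting: maximize expected return E[Σ_t γ^t r_t] subject to an expected-cost constraint E[Σ_t γ^t c_t] ≤ d, where c_t flags undesired behavior. The sketch below shows a generic Lagrangian-relaxation treatment of that objective; it is illustrative only, not the backward-value-function method this paper proposes, and all function names are hypothetical.

```python
import torch

def constrained_pg_loss(log_probs, returns, cost_returns, lam):
    """REINFORCE-style surrogate for max E[R] - lam * E[C].

    log_probs, returns, cost_returns: 1-D tensors over sampled steps.
    lam: current value of the Lagrange multiplier (a float).
    """
    return -((returns - lam * cost_returns) * log_probs).mean()

def dual_update(lam, avg_cost, cost_limit, lr=0.01):
    """Dual ascent on lam: raise it while E[C] exceeds the limit d."""
    return max(0.0, lam + lr * (avg_cost - cost_limit))
```

Keeping the multiplier non-negative means the penalty vanishes once the constraint is satisfied, so the agent pays for unsafe behavior only while the cost budget is being violated.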

Randomized Value Functions via Multiplicative Normalizing Flows

1 code implementation • 6 Jun 2018 • Ahmed Touati, Harsh Satija, Joshua Romoff, Joelle Pineau, Pascal Vincent

In particular, we augment DQN and DDPG with multiplicative normalizing flows in order to track a rich approximate posterior distribution over the parameters of the value function.

Efficient Exploration • Thompson Sampling
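
As a rough illustration of the idea in the snippet above, the sketch below attaches flow-transformed multiplicative noise to a linear layer, so each forward pass draws the effective weights from a non-Gaussian approximate posterior; acting greedily on one such sample yields Thompson-sampling-style exploration. This is a simplification of the paper's method (the KL and auxiliary-posterior terms needed for proper variational training are omitted), and the class names are illustrative.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar-flow step: f(z) = z + u * tanh(w . z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.1)
        self.w = nn.Parameter(torch.randn(dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        return z + self.u * torch.tanh(z @ self.w + self.b)

class MNFLinear(nn.Module):
    """Linear layer whose input units carry multiplicative noise pushed
    through a short normalizing flow, giving a richer approximate
    posterior over the effective weights z_i * W_ji (KL terms omitted)."""
    def __init__(self, in_features, out_features, n_flows=2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.log_sigma = nn.Parameter(torch.full((in_features,), -3.0))
        self.flows = nn.ModuleList([PlanarFlow(in_features) for _ in range(n_flows)])

    def forward(self, x):
        # Draw one noise sample, shared across the batch, and transform it.
        z = 1.0 + torch.exp(self.log_sigma) * torch.randn_like(self.log_sigma)
        for flow in self.flows:
            z = flow(z)
        return (x * z) @ self.weight.t() + self.bias

# Usage: a small Q-network; every forward pass uses a fresh weight sample,
# so greedy action selection already explores in a posterior-driven way.
q_net = nn.Sequential(MNFLinear(4, 64), nn.ReLU(), MNFLinear(64, 2))
state = torch.randn(1, 4)
action = q_net(state).argmax(dim=1)
```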

Decoupling Dynamics and Reward for Transfer Learning

1 code implementation • 27 Apr 2018 • Amy Zhang, Harsh Satija, Joelle Pineau

Current reinforcement learning (RL) methods can successfully learn single tasks but often generalize poorly to modest perturbations in task domain or training procedure.

reinforcement-learning • Reinforcement Learning (RL) • +1
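
The decoupling named in the title can be pictured, very loosely, as learning a reward-agnostic dynamics module separately from a task-specific reward module, so the dynamics part can transfer across tasks that share the same physics. The sketch below is one generic way to set that up under that assumption; it is not a reconstruction of the paper's architecture, and all class names are illustrative.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next state from (state, action); reward-agnostic."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class RewardModel(nn.Module):
    """Predicts the reward from (state, action); swapped out per task."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

# Transfer under this split: freeze the trained DynamicsModel and
# re-train only the RewardModel (and policy) when the task's reward
# changes but its dynamics do not.
```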
