Search Results for author: Gheorghe Comanici

Found 6 papers, 3 papers with code

Learning how to Interact with a Complex Interface using Hierarchical Reinforcement Learning

no code implementations21 Apr 2022 Gheorghe Comanici, Amelia Glaese, Anita Gergely, Daniel Toyama, Zafarali Ahmed, Tyler Jackson, Philippe Hamel, Doina Precup

While the native action space is intractable for simple DQN agents, our architecture establishes an effective way to interact with different tasks, significantly improving the performance of the same DQN agent across different levels of abstraction.

Tasks: Hierarchical Reinforcement Learning, reinforcement-learning +1
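The abstract above describes wrapping an intractable native action space behind higher-level abstractions. A minimal sketch of that idea, with entirely hypothetical option names and raw-action sequences (none of these appear in the paper):

```python
# Hypothetical sketch: a raw interface with a huge native action space is
# wrapped so the agent only chooses among a few abstract options, each of
# which expands into a scripted sequence of raw actions.
RAW_ACTIONS = range(10_000)  # far too many for a simple DQN to explore

OPTIONS = {
    "tap_ok": [3, 17],            # illustrative raw-action sequences
    "scroll_down": [42, 42, 42],
    "type_hello": [101, 102, 103],
}

def execute_option(env_step, name):
    """Expand one abstract option into its raw actions and sum the reward."""
    total_reward = 0.0
    for raw_action in OPTIONS[name]:
        total_reward += env_step(raw_action)
    return total_reward

# Toy environment step: reward 1 for raw action 42, else 0.
reward = execute_option(lambda a: 1.0 if a == 42 else 0.0, "scroll_down")
```

The agent's decision problem shrinks from thousands of raw actions to a handful of options, which is what makes a standard DQN viable at the abstract level.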

Temporally Abstract Partial Models

1 code implementation NeurIPS 2021 Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, Doina Precup

Humans and animals have the ability to reason and make predictions about different courses of action at many time scales.

The Option Keyboard: Combining Skills in Reinforcement Learning

no code implementations NeurIPS 2019 André Barreto, Diana Borsa, Shaobo Hou, Gheorghe Comanici, Eser Aygün, Philippe Hamel, Daniel Toyama, Jonathan Hunt, Shibl Mourad, David Silver, Doina Precup

Building on this insight and on previous results on transfer learning, we show how to approximate options whose cumulants are linear combinations of the cumulants of known options.

Tasks: Management, reinforcement-learning +2
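The snippet above mentions approximating options whose cumulants are linear combinations of known cumulants. A minimal sketch of the underlying idea using successor features, where Q(s, o) = ψ(s, o) · w and a new task's weight vector selects among known options (the feature values here are invented for illustration):

```python
import numpy as np

# Successor features psi(s, o) for 3 known options over 2 cumulant dimensions
# (values are hypothetical, for illustration only).
psi = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
])

def best_option(psi, w):
    """Generalized policy improvement: pick the option with the highest
    value Q(s, o) = psi(s, o) @ w under the combined cumulant w."""
    return int(np.argmax(psi @ w))

# A new task whose cumulant is 0.2 * feature_0 + 0.8 * feature_1:
w_new = np.array([0.2, 0.8])
choice = best_option(psi, w_new)  # option 1 scores 0.8, beating 0.2 and 0.5
```

Because the value of each option is linear in the cumulant weights, new combinations of skills can be evaluated without relearning anything.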

What can I do here? A Theory of Affordances in Reinforcement Learning

1 code implementation ICML 2020 Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, Doina Precup

In the context of embodied agents, Gibson (1977) coined the term "affordances" to describe the fact that certain states enable an agent to perform certain actions.

Tasks: reinforcement-learning, Reinforcement Learning (RL)
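The Gibson framing above, that certain states enable certain actions, can be sketched as a state-conditioned action filter. The states and actions below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical sketch: affordances as a state-conditioned action set.
# Only some actions are "afforded" in each state, shrinking the agent's
# effective action space during planning or learning.
AFFORDANCES = {
    "at_door": {"open", "knock"},
    "at_table": {"pick_up", "push"},
}

def afforded_actions(state, all_actions):
    """Return only the actions afforded in the given state."""
    allowed = AFFORDANCES.get(state, set())
    return [a for a in all_actions if a in allowed]

actions = afforded_actions("at_door", ["open", "run", "knock", "push"])
```

Restricting attention to afforded actions is what lets an agent plan over a much smaller, state-relevant subset of its full action space.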

Basis refinement strategies for linear value function approximation in MDPs

no code implementations NeurIPS 2015 Gheorghe Comanici, Doina Precup, Prakash Panangaden

We provide a theoretical framework for analyzing basis function construction for linear value function approximation in Markov Decision Processes (MDPs).
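Linear value function approximation, the setting the framework above analyzes, represents V(s) = φ(s) · θ for some basis φ. A minimal TD(0) sketch with an invented toy basis and a single self-looping state (not the paper's construction, just the standard setting it studies):

```python
import numpy as np

# Hypothetical sketch: TD(0) with a linear value function V(s) = phi(s) @ theta.
# Basis refinement strategies would enrich phi when the current features
# cannot represent the value function accurately.
def phi(s):
    return np.array([1.0, float(s)])  # toy basis: bias + identity feature

def td0_update(theta, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step on the linear weights theta."""
    delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
    return theta + alpha * delta * phi(s)

theta = np.zeros(2)
for _ in range(1000):
    theta = td0_update(theta, s=0, r=1.0, s_next=0)  # self-loop, reward 1
# V(0) approaches r / (1 - gamma) = 1 / (1 - 0.9) = 10
```

With a single state the basis is trivially sufficient; the paper's question is how to grow the basis when, over many states, it is not.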
