Search Results for author: Haitham Bou-Ammar

Found 10 papers, 4 papers with code

Reinforcement Learning in Presence of Discrete Markovian Context Evolution

no code implementations • ICLR 2022 • Hang Ren, Aivar Sootla, Taher Jafferjee, Junxiao Shen, Jun Wang, Haitham Bou-Ammar

We consider a context-dependent Reinforcement Learning (RL) setting, which is characterized by: a) an unknown finite number of not directly observable contexts; b) abrupt (discontinuous) context changes occurring during an episode; and c) Markovian context evolution.
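The setting described above can be sketched with a toy environment in which a hidden context evolves as a Markov chain and changes the observation distribution mid-episode; the agent never sees which context is active. This is an illustrative sketch of the problem setting only, not the authors' method or code; all names and the transition probabilities are made up.

```python
import random

random.seed(1)

# P(stay in current context): contexts switch abruptly and Markovian-ly,
# matching points (b) and (c) of the setting described in the abstract.
STAY_PROB = {0: 0.95, 1: 0.90}
# Context-dependent observation means (illustrative values).
OBS_MEAN = {0: 0.0, 1: 3.0}

def rollout(steps=50):
    """Roll out a trajectory; the agent would only see the first tuple entry."""
    ctx, trace = 0, []
    for _ in range(steps):
        obs = random.gauss(OBS_MEAN[ctx], 1.0)  # context sets the dynamics
        trace.append((obs, ctx))                # ctx kept here for inspection only
        if random.random() >= STAY_PROB[ctx]:   # Markovian context switch
            ctx = 1 - ctx
    return trace
```

A learner facing such data must infer the finite, unobserved context set from the observations alone, which is what motivates the variational-inference tag below.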

reinforcement-learning Variational Inference

Saute RL: Almost Surely Safe Reinforcement Learning Using State Augmentation

1 code implementation • 14 Feb 2022 • Aivar Sootla, Alexander I. Cowen-Rivers, Taher Jafferjee, Ziyan Wang, David Mguni, Jun Wang, Haitham Bou-Ammar

Satisfying safety constraints almost surely (or with probability one) can be critical for the deployment of Reinforcement Learning (RL) in real-life applications.
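The state-augmentation idea named in the title can be sketched as an environment wrapper that appends the normalized remaining safety budget to every observation and penalizes the agent once the budget is exhausted. This is a hedged sketch of the general idea, not the paper's implementation; the environment API, the `DummyEnv` class, and the penalty choice are all illustrative assumptions.

```python
class DummyEnv:
    """Toy cost-emitting environment, used only to exercise the wrapper."""
    def reset(self):
        return [0.0, 0.0]
    def step(self, action):
        # Every step incurs a safety cost of 0.6 (arbitrary demo value).
        return [0.0, 0.0], 1.0, False, {"cost": 0.6}

class SautedEnv:
    """Appends the normalized remaining safety budget to each observation."""
    def __init__(self, env, safety_budget, penalty=-1.0):
        self.env = env
        self.safety_budget = float(safety_budget)
        self.penalty = penalty  # illustrative choice for exhausted-budget reward
        self.remaining = self.safety_budget

    def reset(self):
        self.remaining = self.safety_budget
        return self.env.reset() + [1.0]  # full budget at episode start

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.remaining -= info.get("cost", 0.0)
        if self.remaining <= 0.0:
            # Once the budget is spent, reward is replaced by a penalty, so an
            # optimal policy avoids ever depleting it -- safety with probability
            # one rather than merely in expectation.
            reward = self.penalty
        return obs + [self.remaining / self.safety_budget], reward, done, info
```

Because the budget is now part of the state, an ordinary RL algorithm can be run on the wrapped environment unchanged.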

reinforcement-learning Safe Reinforcement Learning

Compositional ADAM: An Adaptive Compositional Solver

no code implementations • 10 Feb 2020 • Rasul Tutunov, Minne Li, Alexander I. Cowen-Rivers, Jun Wang, Haitham Bou-Ammar

In this paper, we present C-ADAM, the first adaptive solver for compositional problems involving a non-linear functional nesting of expected values.
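To see why such compositional problems need specialized solvers, note that a nonlinear function of an expectation, f(E[g(w)]), generally differs from the expectation of the function, E[f(g(w))], so naive stochastic estimates are biased. The snippet below demonstrates the gap numerically for f(y) = y² and Gaussian samples; it illustrates the problem class only and is not C-ADAM's update rule.

```python
import random

random.seed(0)

# g(w) ~ N(1, 1), f(y) = y**2.
samples = [random.gauss(1.0, 1.0) for _ in range(100_000)]

mean_g = sum(samples) / len(samples)
f_of_mean = mean_g ** 2                               # estimates f(E[g]) ~= 1.0
mean_of_f = sum(s * s for s in samples) / len(samples)  # estimates E[f(g)] ~= 2.0

# The two quantities differ by Var(g) = 1, so a plain sample-average gradient
# of the inner expectation does not give an unbiased gradient of f(E[g]);
# compositional solvers track a separate running estimate of the inner mean.
```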


Learning High-level Representations from Demonstrations

no code implementations • 19 Feb 2018 • Garrett Andersen, Peter Vrancx, Haitham Bou-Ammar

A common approach to HL is to provide the agent with a number of high-level skills that solve small parts of the overall problem.
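The skill-based approach described above can be sketched as a dictionary of hand-provided sub-policies, each competent on a small sub-task, with the agent acting by delegating to one of them. All skill names and the toy state format are hypothetical illustrations, not the paper's representation-learning method.

```python
# Each skill is a small sub-policy mapping a state to a primitive action.
# The names and logic are invented for illustration only.
SKILLS = {
    "go_to_key":  lambda state: "move_left" if state["x"] > 0 else "pickup",
    "go_to_door": lambda state: "move_right" if state["x"] < 5 else "open",
}

def act(state, skill_name):
    """Delegate action selection to the currently chosen high-level skill."""
    return SKILLS[skill_name](state)
```

A higher-level controller would then learn only which skill to invoke when, shrinking the decision problem the agent must solve from scratch.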

Montezuma's Revenge

Balancing Two-Player Stochastic Games with Soft Q-Learning

no code implementations • 9 Feb 2018 • Jordi Grau-Moya, Felix Leibfried, Haitham Bou-Ammar

Within the context of video games, the notion of perfectly rational agents can be undesirable, as it leads to uninteresting situations where humans face tough adversarial decision-makers.
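Soft Q-learning, named in the title, replaces the hard max in the Bellman backup with a temperature-controlled log-sum-exp, which is exactly the knob that tunes an agent's effective rationality. The sketch below shows the standard single-agent soft value operator and Boltzmann policy, not the paper's specific two-player balancing scheme.

```python
import math

def soft_value(q_values, tau):
    """Soft value V = tau * log sum_a exp(Q_a / tau), in stable log-sum-exp form."""
    m = max(q_values)  # subtract the max to avoid overflow in exp
    return m + tau * math.log(sum(math.exp((q - m) / tau) for q in q_values))

def soft_policy(q_values, tau):
    """Boltzmann policy over actions induced by the soft value operator."""
    m = max(q_values)
    weights = [math.exp((q - m) / tau) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]
```

As the temperature tau shrinks toward zero the operator approaches a hard max (fully rational play); as tau grows the policy approaches uniform random, so tuning tau interpolates between a ruthless and a forgiving opponent.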

Q-Learning reinforcement-learning
