Search Results for author: Feryal Behbahani

Found 13 papers, 1 paper with code

Hierarchical Reinforcement Learning in Complex 3D Environments

no code implementations • 28 Feb 2023 • Bernardo Avila Pires, Feryal Behbahani, Hubert Soyer, Kyriacos Nikiforou, Thomas Keck, Satinder Singh

Hierarchical Reinforcement Learning (HRL) agents have the potential to demonstrate appealing capabilities such as planning and exploration with abstraction, transfer, and skill reuse.

Hierarchical Reinforcement Learning, reinforcement-learning, +1

Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality

no code implementations • 26 May 2022 • Tom Zahavy, Yannick Schroecker, Feryal Behbahani, Kate Baumli, Sebastian Flennerhag, Shaobo Hou, Satinder Singh

Finding different solutions to the same problem is a key aspect of intelligence associated with creativity and adaptation to novel situations.

Model-Value Inconsistency as a Signal for Epistemic Uncertainty

no code implementations • 8 Dec 2021 • Angelos Filos, Eszter Vértes, Zita Marinho, Gregory Farquhar, Diana Borsa, Abram Friesen, Feryal Behbahani, Tom Schaul, André Barreto, Simon Osindero

Unlike prior work which estimates uncertainty by training an ensemble of many models and/or value functions, this approach requires only the single model and value function which are already being learned in most model-based reinforcement learning algorithms.

Model-based Reinforcement Learning, Rolling Shutter Correction
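
As a rough illustration of the idea sketched in the abstract above (estimating epistemic uncertainty from the single model and value function that are already being learned), here is a minimal Python sketch: several value estimates for one state are formed by unrolling a learned model for different numbers of steps before bootstrapping with the value function, and their disagreement is taken as the uncertainty signal. The model, value_fn and policy callables are hypothetical placeholders, not the paper's actual interfaces.

import numpy as np

def k_step_value(state, k, model, value_fn, policy, gamma=0.99):
    """Unroll the learned model for k steps, then bootstrap with value_fn."""
    ret, discount = 0.0, 1.0
    for _ in range(k):
        action = policy(state)
        state, reward = model(state, action)  # learned dynamics and reward model
        ret += discount * reward
        discount *= gamma
    return ret + discount * value_fn(state)

def model_value_inconsistency(state, model, value_fn, policy, max_k=5):
    """Disagreement (std. dev.) among the k-step estimates for k = 0..max_k."""
    estimates = [k_step_value(state, k, model, value_fn, policy) for k in range(max_k + 1)]
    return float(np.std(estimates))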

Learning Compositional Neural Programs for Continuous Control

no code implementations • 27 Jul 2020 • Thomas Pierrot, Nicolas Perrin, Feryal Behbahani, Alexandre Laterre, Olivier Sigaud, Karim Beguir, Nando de Freitas

Third, the self-models are harnessed to learn recursive compositional programs with multiple levels of abstraction.

Continuous Control

Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation

no code implementations • 18 Dec 2019 • Tianhong Dai, Kai Arulkumaran, Tamara Gerbert, Samyakh Tukra, Feryal Behbahani, Anil Anthony Bharath

Furthermore, even with an improved saliency method introduced in this work, we show that qualitative studies may not always correspond with quantitative measures, necessitating the combination of inspection tools in order to provide sufficient insights into the behaviour of trained agents.

reinforcement-learning, Reinforcement Learning (RL)

Modular Meta-Learning with Shrinkage

no code implementations • NeurIPS 2020 • Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew W. Hoffman, Nando de Freitas

Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components.

Image Classification, Meta-Learning, +2
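
To make the "few task-specific components" idea concrete, here is a minimal Python sketch of one plausible ingredient, a per-module shrinkage penalty: task-specific module parameters are pulled toward shared meta-learned values, with a per-module scale controlling how far each module may move. The names and structure are illustrative assumptions, not the paper's code.

import numpy as np

def shrinkage_objective(task_loss, task_params, meta_params, sigma):
    """task_params / meta_params: dicts mapping module name -> np.ndarray (same shapes).
    sigma: dict mapping module name -> positive scale; a small scale keeps that
    module close to the shared meta-learned parameters."""
    penalty = 0.0
    for name, theta in task_params.items():
        phi = meta_params[name]
        penalty += float(np.sum((theta - phi) ** 2)) / (2.0 * sigma[name] ** 2)
    return task_loss + penalty

Under a penalty of this kind, modules whose scale stays small remain effectively shared across tasks, so only a handful of components carry task-specific parameters.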

Learning from Demonstration in the Wild

no code implementations • 8 Nov 2018 • Feryal Behbahani, Kyriacos Shiarlis, Xi Chen, Vitaly Kurin, Sudhanshu Kasewa, Ciprian Stirbu, João Gomes, Supratik Paul, Frans A. Oliehoek, João Messias, Shimon Whiteson

Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical.
