Search Results for author: Rae Jeong

Found 8 papers, 2 papers with code

Learning Dexterous Manipulation from Suboptimal Experts

no code implementations 16 Oct 2020 Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Daniel Zheng, Yuxiang Zhou, Alexandre Galashov, Nicolas Heess, Francesco Nori

Although in many cases the learning process could be guided by demonstrations or other suboptimal experts, current RL algorithms for continuous action spaces often fail to effectively utilize combinations of highly off-policy expert data and on-policy exploration data.

Offline RL, Q-Learning
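
The abstract above points at combining highly off-policy expert data with on-policy exploration data. The paper's own algorithm is not reproduced here; the sketch below only illustrates the generic ingredient of sampling training batches from a mixture of an expert buffer and an on-policy buffer. The function name `sample_mixed_batch` and the `expert_fraction` parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_mixed_batch(expert_buffer, online_buffer, batch_size,
                       expert_fraction=0.25, rng=None):
    """Draw a training mini-batch mixing suboptimal-expert and on-policy transitions.

    expert_buffer / online_buffer: lists of (state, action, reward, next_state, done).
    expert_fraction: share of the batch drawn from expert data (illustrative default).
    """
    rng = rng or np.random.default_rng()
    n_expert = int(round(batch_size * expert_fraction))
    expert_idx = rng.integers(0, len(expert_buffer), size=n_expert)
    online_idx = rng.integers(0, len(online_buffer), size=batch_size - n_expert)
    batch = [expert_buffer[i] for i in expert_idx] + [online_buffer[i] for i in online_idx]
    rng.shuffle(batch)  # interleave expert and on-policy samples
    return batch

# Toy usage: both buffers hold dummy transitions.
expert = [(np.zeros(3), np.zeros(1), 1.0, np.zeros(3), False) for _ in range(100)]
online = [(np.ones(3), np.ones(1), 0.0, np.ones(3), False) for _ in range(100)]
print(len(sample_mixed_batch(expert, online, batch_size=8)))
```

In practice the mixing ratio is a tuning knob: too much expert data keeps the learner close to the suboptimal demonstrations, too little discards the guidance they provide.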

Importance Weighted Policy Learning and Adaptation

no code implementations 10 Sep 2020 Alexandre Galashov, Jakub Sygnowski, Guillaume Desjardins, Jan Humplik, Leonard Hasenclever, Rae Jeong, Yee Whye Teh, Nicolas Heess

The ability to exploit prior experience to solve novel problems rapidly is a hallmark of biological learning systems and of great practical importance for artificial ones.

Meta Reinforcement Learning, reinforcement-learning, +1
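
The title above refers to importance weighting. As a generic illustration rather than the paper's method, the snippet below computes an ordinary importance-sampling estimate of a target policy's return from trajectories gathered by a behaviour policy; the function name, toy policies, and data are all assumptions made for the example.

```python
import numpy as np

def importance_weighted_return(trajectories, target_probs, behaviour_probs, gamma=0.99):
    """Ordinary importance-sampling estimate of the target policy's return.

    trajectories: list of lists of (state, action, reward) tuples.
    target_probs(s, a) / behaviour_probs(s, a): probability of taking action a in
    state s under the target and behaviour policies, respectively.
    """
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= target_probs(s, a) / behaviour_probs(s, a)
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return float(np.mean(estimates))

# Toy usage with two actions and state-independent policies.
behaviour = lambda s, a: 0.5                   # uniform behaviour policy
target = lambda s, a: 0.8 if a == 1 else 0.2   # target policy prefers action 1
rng = np.random.default_rng(0)
trajs = [[(0, rng.integers(2), 1.0) for _ in range(5)] for _ in range(1000)]
print(importance_weighted_return(trajs, target, behaviour))
```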

Modelling Generalized Forces with Reinforcement Learning for Sim-to-Real Transfer

no code implementations 21 Oct 2019 Rae Jeong, Jackie Kay, Francesco Romano, Thomas Lampe, Tom Rothörl, Abbas Abdolmaleki, Tom Erez, Yuval Tassa, Francesco Nori

Learning robotic control policies in the real world gives rise to challenges in data efficiency, safety, and controlling the initial condition of the system.

reinforcement-learning, Reinforcement Learning (RL)
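
The title above is about learning generalized forces that make a simulator behave more like the real system. The following is a toy sketch only: a 1-D point-mass Euler step with an additive, state-dependent residual force from a small linear model. The `residual_params` values are arbitrary placeholders rather than fitted quantities, and nothing here mirrors the paper's actual setup.

```python
import numpy as np

def sim_step(pos, vel, action, residual_params, dt=0.01, mass=1.0):
    """One Euler step of a 1-D point mass with a learned residual generalized force.

    residual_params: (w, b) of a linear model f_res = w . [pos, vel, action] + b,
    standing in for a force term that would be fitted from real-robot data.
    """
    w, b = residual_params
    residual_force = float(np.dot(w, [pos, vel, action]) + b)
    accel = (action + residual_force) / mass
    vel = vel + dt * accel
    pos = pos + dt * vel
    return pos, vel

# Toy usage: roll out a few steps with an arbitrary (untrained) residual model.
params = (np.array([0.0, -0.1, 0.05]), 0.0)  # illustrative values, not fitted
pos, vel = 0.0, 0.0
for _ in range(5):
    pos, vel = sim_step(pos, vel, action=1.0, residual_params=params)
print(pos, vel)
```

In an actual sim-to-real pipeline such a residual term would be fitted (e.g., by regression or RL) so that simulated rollouts track real-robot trajectories more closely.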

Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation

no code implementations 21 Oct 2019 Rae Jeong, Yusuf Aytar, David Khosid, Yuxiang Zhou, Jackie Kay, Thomas Lampe, Konstantinos Bousmalis, Francesco Nori

In this work, we learn a latent state representation implicitly with deep reinforcement learning in simulation, and then adapt it to the real domain using unlabeled real robot data.

Domain Adaptation, reinforcement-learning, +1
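
The abstract above describes learning a latent representation in simulation and then adapting it with unlabeled real data. The paper's specific self-supervised objective is not reproduced; the sketch below uses a deliberately simple stand-in that aligns the mean latent features of a linear encoder between simulated and real batches of random toy "images". The array shapes, names, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(images, W):
    """Toy linear encoder: flatten each image and project it to a latent space."""
    return images.reshape(len(images), -1) @ W

def alignment_loss(W, sim_batch, real_batch):
    """Squared distance between mean latent features of the sim and real batches."""
    diff = encode(sim_batch, W).mean(axis=0) - encode(real_batch, W).mean(axis=0)
    return float(diff @ diff)

def alignment_grad(W, sim_batch, real_batch):
    """Analytic gradient of alignment_loss with respect to the encoder weights."""
    d = (sim_batch.reshape(len(sim_batch), -1).mean(axis=0)
         - real_batch.reshape(len(real_batch), -1).mean(axis=0))
    return 2.0 * np.outer(d, d @ W)

# Toy data: 16 random 8x8 "images" per domain, latent dimension 4.
sim = rng.normal(size=(16, 8, 8))
real = rng.normal(loc=0.5, size=(16, 8, 8))   # shifted mean mimics a domain gap
W = rng.normal(scale=0.1, size=(64, 4))

for _ in range(100):                          # plain gradient descent on the encoder
    W -= 0.01 * alignment_grad(W, sim, real)
print("alignment loss after adaptation:", alignment_loss(W, sim, real))
```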

Robust Reinforcement Learning for Continuous Control with Model Misspecification

no code implementations ICLR 2020 Daniel J. Mankowitz, Nir Levine, Rae Jeong, Yuanyuan Shi, Jackie Kay, Abbas Abdolmaleki, Jost Tobias Springenberg, Timothy Mann, Todd Hester, Martin Riedmiller

We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms.

Continuous Control, reinforcement-learning, +1
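
The abstract above frames robustness as coping with perturbed transition dynamics. As a small generic illustration rather than the paper's algorithm, the snippet below runs robust value iteration on a toy two-state MDP, taking the worst case over a finite set of candidate transition models at every backup; the MDP and the uncertainty set are made up for the example.

```python
import numpy as np

def robust_value_iteration(transition_set, rewards, gamma=0.9, iters=200):
    """Robust value iteration over a finite uncertainty set of transition models.

    transition_set: array of shape (K, S, A, S), K candidate models P_k(s' | s, a).
    rewards: array of shape (S, A).
    Each backup uses the worst case (min over models) of the expected next value.
    """
    V = np.zeros(transition_set.shape[1])
    for _ in range(iters):
        next_vals = transition_set @ V            # (K, S, A): E_{P_k}[V(s')]
        Q = rewards + gamma * next_vals.min(axis=0)
        V = Q.max(axis=1)
    return V, Q

# Toy 2-state, 2-action MDP with a nominal and a perturbed transition model.
P_nominal = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.8, 0.2], [0.1, 0.9]]])   # shape (S=2, A=2, S'=2)
P_perturbed = np.array([[[0.7, 0.3], [0.4, 0.6]],
                        [[0.6, 0.4], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])                         # shape (S, A)
V, Q = robust_value_iteration(np.stack([P_nominal, P_perturbed]), R)
print("robust values:", V)
```

The min over models is the standard robust Bellman backup for a rectangular uncertainty set; continuous-control variants typically approximate the max and min with learned actor and critic updates.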
