Search Results for author: Je-Hwan Ryu

Found 3 papers, 1 paper with code

Robust Imitation via Mirror Descent Inverse Reinforcement Learning

No code implementations · 20 Oct 2022 · Dong-Sig Han, Hyunseo Kim, Hyundo Lee, Je-Hwan Ryu, Byoung-Tak Zhang

Recently, adversarial imitation learning has emerged as a scalable reward-acquisition method for inverse reinforcement learning (IRL) problems.
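To make the adversarial reward-acquisition idea concrete, here is a minimal sketch of the standard GAIL-style surrogate reward, not the paper's mirror-descent method: a discriminator scores how expert-like a transition is, and the learned reward grows as the discriminator becomes more confident.

```python
import numpy as np

def discriminator(logit):
    """Sigmoid: probability that a transition came from the expert.
    (The logit would normally come from a learned network; here it is
    just an input, since this is an illustrative sketch.)"""
    return 1.0 / (1.0 + np.exp(-logit))

def gail_reward(logit):
    """GAIL-style surrogate reward r = -log(1 - D(s, a)).
    Higher when the discriminator judges the transition expert-like."""
    d = discriminator(logit)
    return -np.log(1.0 - d + 1e-8)

# Expert-like transitions (positive logits) earn more reward.
print(gail_reward(2.0) > gail_reward(-2.0))
```

In actual training, the discriminator and the policy are updated in alternation, with the policy maximizing this surrogate reward.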

Tasks: Density Estimation, Imitation Learning (+2)

Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning

1 code implementation · NeurIPS 2021 · Kibeom Kim, Min Whoo Lee, Yoonsung Kim, Je-Hwan Ryu, Minsu Lee, Byoung-Tak Zhang

Learning in a multi-target environment without prior knowledge of the targets requires a large number of samples and makes generalization difficult.

Tasks: Reinforcement Learning (RL) (+1)

Unbiased Learning with State-Conditioned Rewards in Adversarial Imitation Learning

No code implementations · 1 Jan 2021 · Dong-Sig Han, Hyunseo Kim, Hyundo Lee, Je-Hwan Ryu, Byoung-Tak Zhang

The formulation draws a strong connection between adversarial learning and energy-based reinforcement learning; thus, the architecture is capable of recovering a reward function that induces a multi-modal policy.
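The multi-modality the abstract mentions can be illustrated with the generic energy-based view (a sketch of the general idea, not this paper's architecture): a soft Boltzmann policy pi(a|s) proportional to exp(Q(s,a)) keeps equal probability mass on equally valued actions instead of collapsing onto one.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Softmax over action values (numerically stabilized by
    subtracting the max logit before exponentiating)."""
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# Two equally valued actions -> two modes with equal probability,
# both dominating the clearly worse third action.
probs = boltzmann_policy([1.0, 1.0, -3.0])
print(probs)
```

A greedy (argmax) policy would arbitrarily pick one of the two equal actions; the energy-based policy preserves both modes.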

Tasks: Continuous Control, Imitation Learning (+2)
