Search Results for author: Sungho Choi

Found 5 papers, 1 paper with code

Domain Adaptive Imitation Learning with Visual Observation

no code implementations • NeurIPS 2023 • Sungho Choi, Seungyul Han, Woojun Kim, Jongseong Chae, Whiyoung Jung, Youngchul Sung

In this paper, we consider domain-adaptive imitation learning with visual observation, where an agent in a target domain learns to perform a task by observing expert demonstrations in a source domain.

Image Reconstruction • Imitation Learning

Robust Imitation Learning against Variations in Environment Dynamics

1 code implementation • 19 Jun 2022 • Jongseong Chae, Seungyul Han, Whiyoung Jung, Myungsik Cho, Sungho Choi, Youngchul Sung

In this paper, we propose a robust imitation learning (IL) framework that improves the robustness of IL when environment dynamics are perturbed.

Imitation Learning

Adaptive Multi-model Fusion Learning for Sparse-Reward Reinforcement Learning

no code implementations • 1 Jan 2021 • Giseung Park, Whiyoung Jung, Sungho Choi, Youngchul Sung

In this paper, we consider intrinsic reward generation for sparse-reward reinforcement learning based on model prediction errors.

reinforcement-learning • Reinforcement Learning (RL)
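
No code is linked for this entry, so the following is only an illustrative sketch of the general idea the abstract names, intrinsic reward from the prediction error of a learned dynamics model. The DynamicsModel class, its architecture, and the tensor shapes are assumptions for illustration, not the authors' implementation:

import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next observation from (observation, action). Hypothetical model, assumed architecture."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def intrinsic_reward(model, obs, act, next_obs):
    # Reward states the learned model predicts poorly:
    # intrinsic reward = mean squared prediction error per transition.
    with torch.no_grad():
        pred = model(obs, act)
    return ((pred - next_obs) ** 2).mean(dim=-1)

In a typical setup of this kind, the intrinsic term is scaled and added to the sparse extrinsic reward, and the dynamics model is trained online on the same transitions the agent collects.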

Cross-Domain Imitation Learning with a Dual Structure

no code implementations • 2 Jun 2020 • Sungho Choi, Seungyul Han, Woojun Kim, Youngchul Sung

In this paper, we consider cross-domain imitation learning (CDIL), in which an agent learns a policy that performs well in a target domain by observing expert demonstrations in a source domain, without access to any reward function.

Imitation Learning

Model Ensemble-Based Intrinsic Reward for Sparse Reward Reinforcement Learning

no code implementations • 25 Sep 2019 • Giseung Park, Whiyoung Jung, Sungho Choi, Youngchul Sung

In this paper, we propose a new intrinsic reward generation method for sparse-reward reinforcement learning based on an ensemble of dynamics models.

reinforcement-learning • Reinforcement Learning (RL)
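
This entry also lists no code. One common way to turn an ensemble of dynamics models into an intrinsic reward is ensemble disagreement, i.e., the variance across member predictions; the sketch below illustrates that generic idea under assumed shapes, not necessarily the paper's exact formulation:

import torch

def ensemble_intrinsic_reward(models, obs, act):
    # Stack next-state predictions from each ensemble member: (K, B, obs_dim).
    with torch.no_grad():
        preds = torch.stack([m(obs, act) for m in models])
    # Disagreement = variance across members, averaged over state dims: (B,).
    return preds.var(dim=0).mean(dim=-1)

Each member could be a dynamics model like the sketch above, trained on, say, different bootstrap samples of the replay buffer so that the ensemble disagrees most in poorly explored regions.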
