no code implementations • NeurIPS 2023 • Sungho Choi, Seungyul Han, Woojun Kim, Jongseong Chae, Whiyoung Jung, Youngchul Sung
In this paper, we consider domain-adaptive imitation learning with visual observation, where an agent in a target domain learns to perform a task by observing expert demonstrations in a source domain.
1 code implementation • 19 Jun 2022 • Jongseong Chae, Seungyul Han, Whiyoung Jung, Myungsik Cho, Sungho Choi, Youngchul Sung
In this paper, we propose a robust imitation learning (IL) framework that improves the robustness of IL when environment dynamics are perturbed.
1 code implementation • 10 Dec 2021 • Giseung Park, Sungho Choi, Youngchul Sung
This paper proposes a new sequential model learning architecture to solve partially observable Markov decision problems.
no code implementations • 1 Jan 2021 • Giseung Park, Whiyoung Jung, Sungho Choi, Youngchul Sung
In this paper, we consider intrinsic reward generation for sparse-reward reinforcement learning based on model prediction errors.
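The general idea of prediction-error-based intrinsic reward can be sketched as follows. This is a generic illustration of the technique, not the paper's specific method; the function name and scaling parameter are hypothetical:

```python
def intrinsic_reward(predicted_next_state, actual_next_state, scale=1.0):
    """Intrinsic bonus proportional to the dynamics model's squared
    prediction error for one transition. A large error means the agent
    visited a transition its model has not yet learned, i.e. novelty."""
    err = sum((p - a) ** 2 for p, a in zip(predicted_next_state, actual_next_state))
    return scale * err

# A perfectly predicted transition earns no bonus; a surprising one does.
r_known = intrinsic_reward([0.0, 1.0], [0.0, 1.0])  # 0.0
r_novel = intrinsic_reward([0.0, 1.0], [0.5, 2.0])  # 1.25
```

In training, this bonus is typically added to the (sparse) environment reward, e.g. `r_total = r_env + beta * r_int`, so the agent is driven to explore transitions its model predicts poorly.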
no code implementations • 2 Jun 2020 • Sungho Choi, Seungyul Han, Woojun Kim, Youngchul Sung
In this paper, we consider cross-domain imitation learning (CDIL), in which an agent in a target domain learns a policy by observing expert demonstrations from a source domain, without access to any reward function.
no code implementations • 25 Sep 2019 • Giseung Park, Whiyoung Jung, Sungho Choi, Youngchul Sung
In this paper, we propose a new intrinsic-reward generation method for sparse-reward reinforcement learning based on an ensemble of dynamics models.
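One common way to turn an ensemble of dynamics models into an intrinsic reward is to use their disagreement: if the models predict different next states for the same (state, action) pair, that region of the dynamics is poorly learned. The sketch below illustrates this general technique under that assumption; it is not claimed to be the paper's exact formulation, and the function name is hypothetical:

```python
def ensemble_disagreement_bonus(predictions, scale=1.0):
    """Intrinsic bonus from disagreement across an ensemble of dynamics
    models, computed as the per-dimension population variance of their
    next-state predictions, summed over state dimensions.

    predictions: list of predicted next states (one per ensemble member),
    each a list of floats of the same length."""
    n = len(predictions)
    dim = len(predictions[0])
    bonus = 0.0
    for d in range(dim):
        vals = [p[d] for p in predictions]
        mean = sum(vals) / n
        bonus += sum((v - mean) ** 2 for v in vals) / n
    return scale * bonus

# Models that agree give no bonus; disagreement flags unexplored dynamics.
agree = ensemble_disagreement_bonus([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])  # 0.0
disagree = ensemble_disagreement_bonus([[1.0, 2.0], [1.2, 2.0], [0.8, 2.0]])
```

Compared with a single model's prediction error, ensemble disagreement stays informative even for inherently stochastic transitions, since all members eventually converge to the same predictive distribution there.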